Search Results

Search found 4140 results on 166 pages for 'alias analysis'.


  • Tab Auto-Completion in Mac OS X when using sftp in terminal

    - by AlanTuring
    I have been getting very frustrated lately since the readline functionality was removed from Mac OS X and tab auto-completion no longer works. I was wondering if anyone knew a good alternative I could install so that I can tab-complete filenames when sftp'd into a server. I heard that with-readline is a good option for this. If so, how do I get an alias sftp = with-readline sftp to work? I would like to do the same with any other option that isn't with-readline, so that I don't have to set up an alias each time I start a session. I am using Mac OS X 10.8 (Mountain Lion) with Homebrew installed. Thanks in advance to anyone who can help me.
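
    A possible way to make this stick for every session (a sketch, assuming a readline wrapper such as rlwrap or with-readline has been installed, e.g. rlwrap via Homebrew) is to define the alias once in ~/.bash_profile:

        # ~/.bash_profile -- minimal sketch; rlwrap is assumed to be installed (brew install rlwrap)
        alias sftp='rlwrap sftp'
        # or, if with-readline is the preferred wrapper:
        # alias sftp='with-readline sftp'

    Note that a wrapper like rlwrap mainly restores history and line editing; whether remote filenames complete with Tab still depends on the wrapper's completion support, so treat this as a starting point rather than a guaranteed fix.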

    Read the article

  • Close lid option "Do nothing" absent

    - by Jaap
    I just unpacked my new Sony Vaio with Windows 8 Pro installed. Everything is nice, so I tried setting my power management options. The "When I close the lid" option only lists: Hibernate, Sleep, Shutdown. The "Do nothing" option is not present. I've seen loads of posts on Google where people ask or explain how to set this option to "Do nothing", but in all my power plans this option is absent... Can I use a tool to get around this, or is there a way to force Windows to show me this option (and have it actually work)? UPDATE: powercfg /q <guid> gives me this output:
    Subgroup GUID: 4f971e89-eebd-4455-a8de-9e59040e7347 (Power buttons and lid) GUID Alias: SUB_BUTTONS
    Power Setting GUID: 5ca83367-6e45-459f-a27b-476b1d01c936 (Lid close action) GUID Alias: LIDACTION
    Possible Setting Index: 000 Possible Setting Friendly Name: Sleep
    Possible Setting Index: 001 Possible Setting Friendly Name: Hibernate
    Possible Setting Index: 002 Possible Setting Friendly Name: Shut down
    Current AC Power Setting Index: 0x00000000
    Current DC Power Setting Index: 0x00000000
    Other sections have different enumerations where 000 stands for No action.
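
    A possible workaround, sketched below purely from the SUB_BUTTONS/LIDACTION aliases in the powercfg output above, is to write the value index directly from an elevated command prompt. Which index means "No action" is an assumption here, since the enumeration on this machine starts at Sleep while other sections use 000 for No action:

        rem unhide the lid-action setting in case the OEM power plan hides it
        powercfg -attributes SUB_BUTTONS LIDACTION -ATTRIB_HIDE
        rem try writing the "do nothing" index for AC and battery, then re-apply the plan
        powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setactive SCHEME_CURRENT

    If powercfg /q afterwards still lists only three possible settings, the restriction probably comes from Sony's own power management driver rather than from Windows itself.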

    Read the article

  • Changing languages rapidly causes Linux to crash.

    - by eZanmoto
    So I'm running Xmonad on my college computer (which runs Kubuntu), and whenever I leave my desk, instead of using xscreensaver, which is incredibly buggy and slow, I just switch to another workspace, open a terminal and change the keyboard layout to a language that uses symbols instead of letters, and then change back using an aliased command. For example, my .profile has the lines alias qwer="setxkbmap jp" and alias *******="setxkbmap ie", where ******* is my password typed in Japanese characters. Changing layouts seems to be much faster than running xscreensaver. The problem: rapidly changing layouts seems to crash Linux; it just won't accept input (and it's not because the layout hasn't changed back; nothing is output to the console). I can't use Ctrl+Alt+F1..F7, I can't "raise the elephants" (the REISUB SysRq sequence), anything, it just won't work. I'm just wondering, is this a known issue, and if so, is there something I can do about it?

    Read the article

  • Mac OS X 10.6 executable not found without full path

    - by Danack
    I just installed Apache via MacPorts. It seems that my Mac was absolutely confused about which version of the Apache executables to run. After moving the Apache executables that ship with the Mac to a directory that is not listed in the PATH variable, trying to run the httpd built by MacPorts fails even though the correct directory (/opt/local/apache2/bin) is listed in the PATH variable. If I navigate to the directory /opt/local/apache2/bin and type the command httpd I still get the error message -bash: httpd: command not found If I type the command with the full path /opt/local/apache2/bin/httpd it works fine. I've run the command alias to see if something was clashing but the only thing listed is: alias wget='curl -O' How do I find what is intercepting the command and preventing the executable being found in the directory, even when I'm inside the same directory? By the way, the httpd file is executable: -rwxr-xr-x 1 root admin 442496 9 May 2012 httpd
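
    A few quick checks may show what the shell is actually resolving (a diagnostic sketch, nothing specific to MacPorts): bash caches command locations, and a bare command name is never searched for in the current directory unless "." is on the PATH.

        type -a httpd                  # list every alias/function/file bash would use for "httpd"
        hash -r                        # drop bash's cached command locations
        echo "$PATH" | tr ':' '\n'     # confirm /opt/local/apache2/bin is really on this shell's PATH
        ./httpd -v                     # inside /opt/local/apache2/bin; plain "httpd" won't search the current dir

    If type -a finds nothing at all, the PATH edit may live in a startup file this shell never reads (e.g. .bash_profile vs .profile).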

    Read the article

  • Per-user vhost logging

    - by kojiro
    I have a working per-user virtual host configuration with Apache, but I would like each user to have access to the logs for his virtual hosts. Obviously the ErrorLog and CustomLog directives don't accept the wildcard syntax that VirtualDocumentRoot does, but is there a way to achieve logs in each user's directory? <VirtualHost *:80> ServerName *.example.com ServerAdmin [email protected] VirtualDocumentRoot /home/%2/projects/%1 <Directory /home/*/projects/> Options FollowSymlinks Indexes IndexOptions FancyIndexing FoldersFirst AllowOverride All Order Allow,Deny Allow From All Satisfy Any </Directory> Alias /favicon.ico /var/www/default/favicon.ico Alias /robots.txt /var/www/default/robots.txt LogLevel warn # ErrorLog /home/%2/logs/%1.error.log # CustomLog /home/%2/logs/%1.access.log combined </VirtualHost>
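
    Since ErrorLog/CustomLog don't expand the %1/%2 placeholders that VirtualDocumentRoot uses, one workaround (a sketch only, mirroring the /home/%2/projects/%1 layout above) is to log every request through a pipe with the hostname prefixed, e.g. LogFormat "%V %h %l %u %t \"%r\" %>s %b" vhost_combined and CustomLog "|/usr/local/bin/split-vhost-log" vhost_combined, and let a small script fan the lines out per user:

        #!/usr/bin/env bash
        # hypothetical /usr/local/bin/split-vhost-log -- reads "project.user.example.com <log line>" on stdin
        while read -r host rest; do
            project=${host%%.*}                  # first hostname label  -> %1
            user=$(echo "$host" | cut -d. -f2)   # second hostname label -> %2
            logdir="/home/$user/logs"
            [ -d "$logdir" ] || continue         # ignore hosts that don't map to a real user
            printf '%s\n' "$rest" >> "$logdir/$project.access.log"
        done

    Error logs are harder to split this way because ErrorLog lines don't carry the hostname in the same format, so per-user error logs may need a different approach.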

    Read the article

  • How can I get bash to perform tab-completion for my aliases?

    - by dstarh
    I have a bunch of bash completion scripts set up (mostly using bash-it and some manually setup). I also have a bunch of aliases setup for common tasks like gco for git checkout. Right now I can type git checkout dTab and develop is completed for me but when I type gco dTab it does not complete. I'm assuming this is because the completion script is completing on git and it fails to see gco. Is there a way to generically/programmatically get all of my completion scripts to work with my aliases? Not being able to complete when using the alias kind of defeats the purpose of the alias.
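
    For the git case specifically, a commonly used trick (hedged, because __git_complete is an internal helper inside git's bash completion script and may differ between versions) is to bind the alias to the same completion function the real subcommand uses:

        # ~/.bashrc -- sketch; assumes git's completion script is already sourced (bash-it normally loads it)
        # source /usr/share/bash-completion/completions/git    # location varies by OS/distro
        alias gco='git checkout'
        __git_complete gco _git_checkout    # let "gco <Tab>" complete branches like "git checkout <Tab>"

    A fully generic answer needs a wrapper that expands the alias and re-runs the original completion function against the expanded words; third-party scripts exist for that, but per-alias bindings like the one above cover the common cases.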

    Read the article

  • downgrading the php version

    - by aadiahg
    I upgraded my PHP from 5.3.5-1ubuntu7.11 to 5.3.18-1~dotdeb.0 and ran into a lot of problems afterwards. My localhost/phpmyadmin displays a blank screen. apache2 shows me a warning message: waiting [Sun Nov 04 12:11:21 2012] [warn] The Alias directive in /etc/apache2/conf.d/phpmyadmin.conf at line 3 will probably never match because it overlaps an earlier Alias. Most CMSes can't be installed and show me this error: Required MySQL version for CMS is 5.x but this server has: mysqlnd 5.0.8-dev. Some functions of the scripts that were already installed don't work. I've googled a lot to fix these problems, and I've also googled how to downgrade PHP from 5.3.18 back to 5.3.x, but it doesn't work for me. Can you help please? Many thanks.
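
    A sketch of the usual downgrade route on Ubuntu is below; the package names and the continued availability of 5.3.5-1ubuntu7.11 in the Ubuntu archive are assumptions, so check the apt-cache policy output before forcing anything:

        # see which versions apt can still install
        apt-cache policy php5 php5-cli php5-mysql libapache2-mod-php5

        # remove/disable the dotdeb repository that supplied 5.3.18, then refresh the package lists
        # (the exact file under /etc/apt/sources.list.d/ depends on how the repo was added)
        sudo apt-get update

        # force the Ubuntu build back for every upgraded php5-* package, then restart Apache
        sudo apt-get install php5=5.3.5-1ubuntu7.11 php5-cli=5.3.5-1ubuntu7.11 libapache2-mod-php5=5.3.5-1ubuntu7.11
        sudo service apache2 restart

    Pinning the Ubuntu origin in /etc/apt/preferences afterwards keeps apt from pulling the dotdeb build back in on the next upgrade.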

    Read the article

  • Make interchangeable class types via pointer casting only, without having to allocate any new objects?

    - by HostileFork
    UPDATE: I do appreciate "don't want that, want this instead" suggestions. They are useful, especially when provided in context of the motivating scenario. Still...regardless of goodness/badness, I've become curious to find a hard-and-fast "yes that can be done legally in C++11" vs "no it is not possible to do something like that". I want to "alias" an object pointer as another type, for the sole purpose of adding some helper methods. The alias cannot add data members to the underlying class (in fact, the more I can prevent that from happening the better!) All aliases are equally applicable to any object of this type...it's just helpful if the type system can hint which alias is likely the most appropriate. There should be no information about any specific alias that is ever encoded in the underlying object. Hence, I feel like you should be able to "cheat" the type system and just let it be an annotation...checked at compile time, but ultimately irrelevant to the runtime casting. Something along these lines: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Under the hood, the factory method is actually making a NodeBase class, and then using a similar reinterpret_cast to return it as a Node<AccessorFoo>*. The easy way to avoid this is to make these lightweight classes that wrap nodes and are passed around by value. Thus you don't need casting, just Accessor classes that take the node handle to wrap in their constructor: AccessorFoo foo (NodeBase::createViaFactory()); AccessorBar bar (foo.getNode()); But if I don't have to pay for all that, I don't want to. That would involve--for instance--making a special accessor type for each sort of wrapped pointer (AccessorFooShared, AccessorFooUnique, AccessorFooWeak, etc.) Having these typed pointers being aliased for one single pointer-based object identity is preferable, and provides a nice orthogonality. So back to that original question: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Seems like there would be some way to do this that might be ugly but not "break the rules". According to ISO14882:2011(e) 5.2.10-7: An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type "pointer to T1" is converted to the type "pointer to cv T2", the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type "pointer to T1" to the type "pointer to T2" (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.
Drilling into the definition of a "standard-layout class", we find: has no non-static data members of type non-standard-layout-class (or array of such types) or reference, and has no virtual functions (10.3) and no virtual base classes (10.1), and has the same access control (clause 11) for all non-static data members, and has no non-standard-layout base classes, and either has no non-static data member in the most-derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and has no base classes of the same type as the first non-static data member. Sounds like working with something like this would tie my hands a bit with no virtual methods in the accessors or the node. Yet C++11 apparently has std::is_standard_layout to keep things checked. Can this be done safely? Appears to work in gcc-4.7, but I'd like to be sure I'm not invoking undefined behavior.

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging on developing a Url Shrinking Service in ASP.NET MVC, but could not complete it due to my engagement with my professional projects. Recently, I was able to find some time for this project to complete the remaining features that we planned for the initial release. So I am announcing the official release; the source code is hosted on CodePlex, and you can also see it live in action over here. The features that we have implemented so far: Public: OpenID Login. Base 36 and 62 based URL generation. 301 and 302 Redirect. Custom Alias. Maintaining Generated URLs of User. URL Thumbnail. Spam Detection through Google Safe Browsing. Preview Page (with Google warning). REST based API for URL shrinking (json/xml/text). Control Panel: Application Health monitoring. Marking URL as Spam/Safe. Block/Unblock User. Allow/Disallow User API Access. Manage Banned Domains. Manage Banned IP Addresses. Manage Reserved Alias. Manage Bad Words. Twitter Notification when spam submitted. Behind the scenes it is developed with: Entity Framework 4 (Code Only), ASP.NET MVC 2, AspNetMvcExtensibility, Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects), DotNetOpenAuth, Elmah, Moq, xUnit.net, jQuery. We will also be releasing a minor update in a few weeks which will contain some of the popular Twitter client plug-ins and samples showing how to use the REST API; we will also try to include the nHibernate + Spark version in that release. In the next release, not sure about the timeline yet, we will include Geo-Coding and some rich reporting for both the User and the Administrators. Enjoy!!!

    Read the article

  • Domain registration and DNS, what am I actually paying for? [on hold]

    - by jozxyqk
    Long story short I'm quite confused as to exactly what is offered by domain registration and dns service sites. When I go to the url "http://google.com", my PC connects to a name server and gets the IP for "google.com", then connects to the IP and says, give me the page for "http://google.com". AFAIK there are many name servers and they all cache these bits of information in some hierarchical network, but ultimately a DNS record must come from a single source (not sure what this is called). There are different kinds of records, that might not an IP but an alias/redirect to other records for example. Lets say I want my own domain name for some server. Maybe it even has a static IP but I want a nicer thing for people to remember, or my ISP assigns dynamic IPs and I want a URL that always works, or my website is hosted on a shared machine so the browser needs to send "http://mydnsname.com" to the webserver to distinguish it from other requests to the same IP but for different sites. Registering a domain costs a small amount of money per year. Where does this money go, not that I'm complaining :P? Is that really all it costs to maintain the entire DNS system of nameservers? If I just register the domain and nothing else, what do I get? Is that just reserving a name or hosting WHOIS information or have I paid for a dns recrord to be hosted? Can a domain alone have a record, such as an IP or be an alias to another? A bunch of sites out there offer other services, in addition to domain registration (I'm assuming they register the domain through another party for me). One example is "dynamic DNS" (DDNS), but isn't this just a regular DNS record that's updated regularly? Does it cost extra to update more often? Without a DDNS, can a DNS record still point to an IP? I've also seen the term "managed DNS" and have no idea where that fits in.
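
    To make the record types concrete, a quick sketch with dig (using the mydnsname.com placeholder from the question) shows the difference between an address record and an alias, and +trace walks the delegation down to the authoritative nameservers that ultimately hold the record:

        dig +noall +answer mydnsname.com A          # A record: name -> IP address
        dig +noall +answer www.mydnsname.com CNAME  # CNAME record: an alias pointing at another name
        dig +trace mydnsname.com                    # follow root -> .com -> the domain's authoritative servers

    Broadly speaking, the registration fee pays the registry/registrar for keeping the delegation and WHOIS data in the parent zone; actually hosting the records is the job of whichever nameservers (DNS service) the domain is delegated to, whether the registrar bundles that or it is bought separately.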

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Bakhtiyor
    I have mailserver configure using dovecot+postfix+mysql and it was runnig fine in the server(Ubuntu Server). But during last week it stopped working correctly. It doesn't send email. When I try to telnet localhost smtp I'm connecting successfully but when I do mail from:<[email protected]> and hit Enter it hangs on, nothing happen. Having reviewed /var/log/mail.log file I've found out that probably(99%) the problem is on postfix when it is trying to connect to MySQL server. If you see the log file given below you can see that it says Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). Nov 14 21:54:36 ns1 dovecot: dovecot: Killed with signal 15 (by pid=7731 uid=0 code=kill) Nov 14 21:54:36 ns1 dovecot: Dovecot v1.2.9 starting up (core dumps disabled) Nov 14 21:54:36 ns1 dovecot: auth-worker(default): mysql: Connected to localhost (mailserver) Nov 14 21:54:44 ns1 postfix/postfix-script[7753]: refreshing the Postfix mail system Nov 14 21:54:44 ns1 postfix/master[1670]: reload -- version 2.7.0, configuration /etc/postfix Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: fatal: mysql:/etc/postfix/mysql-virtual-alias-maps.cf(0,lock|fold_fix): table lookup problem Nov 14 21:54:53 ns1 postfix/master[1670]: warning: process /usr/lib/postfix/trivial-rewrite pid 7759 exit status 1 Nov 14 21:54:53 ns1 postfix/cleanup[7397]: warning: problem talking to service rewrite: Connection reset by peer Nov 14 21:54:53 ns1 postfix/master[1670]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling Nov 14 21:54:53 ns1 postfix/smtpd[7071]: warning: problem talking to service rewrite: Success I tried netstat -ln | grep mysql and it returns unix 2 [ ACC ] STREAM LISTENING 5817 /var/run/mysqld/mysqld.sock. The content of /etc/postfix/mysql-virtual-alias-maps.cf file is here: user = stevejobs password = apple hosts = localhost dbname = mailserver query = SELECT destination FROM virtual_aliases WHERE source='%s' Here I tried to change hosts = 127.0.0.1 but it says warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110) So, I am lost and don't know where else to change in order to solve the problem. Any help would be appreciated highly. Thank you.
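
    A few checks that may narrow this down (a diagnostic sketch; the credentials and database name come from the mysql-virtual-alias-maps.cf above). On Ubuntu, several Postfix services run chrooted under /var/spool/postfix, so a socket at /var/run/mysqld/mysqld.sock can be unreachable for them, and the 127.0.0.1 attempt then times out if MySQL is not listening on TCP:

        # is mysqld listening on TCP at all?
        sudo netstat -ltnp | grep 3306
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf

        # do the credentials from the map file work over TCP?
        mysql -h 127.0.0.1 -u stevejobs -p mailserver

        # which Postfix services are chrooted? (5th column in master.cf)
        grep -v '^#' /etc/postfix/master.cf | awk 'NF {print $1, $5}'

    If TCP turns out to be disabled, either set bind-address = 127.0.0.1 (and remove any skip-networking) in my.cnf, restart MySQL and keep hosts = 127.0.0.1 in the map files, or keep the socket and make it reachable from inside the chroot.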

    Read the article

  • unable to load nvidia(bumblebee) in ubuntu 14.04 (only nouveau loads)

    - by Ubuntuser
    Bumblebee stopped working on my system after upgrading to stable version of Ubuntu 14.04. DUring installation I get this error rmmod: ERROR: Module nouveau is in use Setting up bumblebee (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nouveau . bumblebeed start/running, process 11133 Processing triggers for initramfs-tools (0.103ubuntu4.1) ... update-initramfs: Generating /boot/initrd.img-3.14.1-031401-generic Setting up bumblebee-nvidia (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nvidia rmmod: ERROR: Module nouveau is in use bumblebeed start/running, process 18284 It says nouveau is in use. I checked the loaded modules lsmod | grep nouveau nouveau 1097199 1 mxm_wmi 13021 1 nouveau ttm 85115 1 nouveau i2c_algo_bit 13413 2 i915,nouveau drm_kms_helper 52758 2 i915,nouveau drm 302817 7 ttm,i915,drm_kms_helper,nouveau wmi 19177 3 dell_wmi,mxm_wmi,nouveau video 19476 2 i915,nouveau However, I have nouveau in my blacklist cat /etc/modprobe.d/blacklist.conf | grep nouveau blacklist nouveau blacklist lbm-nouveau alias nouveau off alias lbm-nouveau off My grub is also set to nomodeset cat /etc/default/grub | grep nomodeset GRUB_CMDLINE_LINUX_DEFAULT="nomodeset quiet splash" My graphics card is nvidia optimus lspci | grep -i vga 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 01:00.0 VGA compatible controller: NVIDIA Corporation GT218M [GeForce 310M] (rev ff) I've raised a bug in launchpad: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1327598 Note: Nvidia-prime is working for me (partially). Frequent mouse locks. Interestingly, bumblebee works perfectly fine on my fedora 20 partition on this same laptop.
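
    One detail that stands out: nouveau is blacklisted in /etc/modprobe.d but still loaded, which often means the initramfs predates the blacklist (or something pulls the module in early). A possible sequence to try, offered only as a guess from the output above:

        sudo update-initramfs -u          # rebuild the initramfs so the blacklist applies at early boot
        sudo reboot

        # after rebooting, confirm nouveau is gone before reinstalling bumblebee-nvidia
        lsmod | grep nouveau || echo "nouveau not loaded"

        # if it still loads, try unloading it by hand (this fails while X / the display manager is using it)
        sudo modprobe -r nouveau

    Note that nomodeset on the kernel command line can also interfere with both nouveau and the Intel driver in an Optimus setup, so it may be worth retesting without it once bumblebee installs cleanly.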

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status I've checked my regular expression against the paths of the directories with RegexBuddy, just to be sure I hadn't got it wrong. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking out Railo admin site access.
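
    One way to see which rule actually answers (a verification sketch, keeping the Server_IP_Address placeholder from above) is to compare the status codes returned for the public and non-public paths; note that .cfm/.cfc requests are handed to the proxy/JK layer and never map to the filesystem, so a <Location>-based deny, like the one already used for the Railo admin, may be the more reliable tool here than <Directory>:

        # 200 = served, 403 = denied, 301/302 = redirected to https
        for path in /Railo/Public/ /Railo/NotPublic/ /Railo/Sandbox/ /Railo/NotPublic/index.cfm; do
            printf '%-30s ' "$path"
            curl -k -s -o /dev/null -w '%{http_code}\n' "https://Server_IP_Address$path"
        done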

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent in my apache config file, and then restarted apache. When I try to access the old domain foo.com, I get a 403 error. This is what my apache config file looks like: <VirtualHost *:80> ServerName foo.com #ServerAlias www.foo.com #ServerAdmin [email protected] Redirect permanent / http://www.foobar.org/ DocumentRoot /path/to/project/foo/web DirectoryIndex index.php # CustomLog with format nickname LogFormat "%h %l %u %t \"%r\" %>s %b" common CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common LogLevel notice ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log" <Directory /> Order Deny,Allow Deny from all </Directory> <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> <Directory /path/to/project/foo/web> Options -Indexes -Includes AllowOverride All Allow from All RewriteEngine On # We check if the .html version is here (cacheing) RewriteRule ^$ index.html [QSA] RewriteRule ^([^.])$ $1.html [QSA] RewriteCond %{REQUEST_FILENAME} !-f # No, so we redirect to our front end controller RewriteRule ^(.*)$ index.php [QSA,L] </Directory> <Directory /path/to/project/foo/web/uploads> Options -ExecCGI -FollowSymLinks -Indexes -Includes AllowOverride None php_flag engine off </Directory> Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf> # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf> Options -Indexes -Includes AllowOverride All Allow from All </Directory> </VirtualHost> Can anyone spot what I may be doing wrong?. The site foobar.org does exist so I don't know why this error occurs - help?
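
    To confirm whether the Redirect fires at all, or whether the request falls through to the DocumentRoot and the <Directory /> deny, checking the response headers can help (a quick sketch):

        curl -sI http://foo.com/ | head -n 5                        # expect "301 Moved Permanently" plus a Location: header
        curl -sI -H 'Host: foo.com' http://localhost/ | head -n 5   # same check run on the server itself
        apachectl -S                                                # (or apache2ctl -S) confirm which vhost owns foo.com

    A 403 here suggests the request is being handled somewhere the Deny from all applies rather than by the Redirect, so the vhost mapping output is worth a close look.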

    Read the article

  • Do any CDN services offer multiple urls (or aliases) for your files?

    - by Jakobud
    Lets say a company has multiple commercial web properties that happen to use a lot of the same images on each site. For SEO reasons, the sites must not appear to be related to eachother in any way. This means that the sites can't all link to the same image, even though they all use the same one. Therefore, an image is uploaded to each site and served from each site separately. In order to improve maintainability and latency, lets say the company wanted to use a CDN service. What I'm wondering is, if you upload a file, like an image or something, to a CDN, is there basically one single URL that you access that image at? Or do some (or all) CDN services offer alias URLs so that you can access the same resource from multiple URLs? Example of undesirable situation: Both sites link to the same file URL Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Example of DESIRABLE situation: Both sites link to the same file via different URLs Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-alias-path/myimage.jpg"/> So in the end, there is only one single file, myimage.jpg on the CDN server, but it is accessible from multiple URLs. Is this possible with CDN services? I know this would make browsers cache the same image twice, but at least it would be better for maintainability. Only one file would ever have to be uploaded.

    Read the article

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a follow-up to my previous entry, Using the full tty real estate with sqlplus (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used for adding history and command-line editing to 'sqlplus'. Here's another, but again kludgy, implementation. First, this is the alias:

        alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the file content:

        #!/usr/bin/env perl
        use lib ($ENV{RLWRAP_FILTERDIR} or ".");
        use RlwrapFilter;
        use POSIX qw(:signal_h);
        use strict;

        my $filter = new RlwrapFilter;
        $filter -> prompt_handler(\&prompt);
        sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
        $SIG{WINCH} = 'winchHandler';
        $filter -> run;

        sub winchHandler {
            $filter -> input_handler(\&input);
            sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
            $SIG{WINCH} = 'winchHandler';
            $filter -> run;
        }

        sub input {
            $filter -> input_handler(undef);
            return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
        }

        sub prompt {
            if ($_ =~ "SQL> ") {
                $filter -> input_handler(\&input);
                $filter -> prompt_handler(undef);
            }
            return $_;
        }

    I hope I can compare these 2 implementations after testing more and getting some feedback.

    Read the article

  • bash-like features in sqlplus, rman and other Oracle command line tools

    - by Gilles Haro
    As far as I can remember, I have always been complaining about the lack of “recall last command” from within sqlplus. Such a basic thing, available in any bash shell or Windows cmd terminal, remains missing from Oracle's command-line tools. Thanks to davidw, who published a post on the French blog EASYTEAM, it is now possible to use a simple rpm package, rlwrap, to enhance sqlplus, dgmgrl, rman, … and give them bash-like “recall/completion” capabilities. I installed it in a few minutes and I am already wondering how people can work without it. The steps are: Get the rpm file from sites like RPM PBone. As root, install the package: rpm -ivh rlwrap-0.37-1.el5.x86_64.rpm As Oracle, create a dictionary file (for autocompletion), $HOME/.oracle_keywords. This file is made of a series of words to be used for autocompletion. Put in it the list of dictionary tables, the list of SQL commands, the list of sqlplus commands… whatever you like, and use the <tab> key as you would in a bash shell. Create an alias for sqlplus: alias sqlplus='/usr/bin/rlwrap -if $HOME/.oracle_keywords $ORACLE_HOME/bin/sqlplus' And enjoy it! Thank you DavidW. Gilles Haro Technical Expert - Core Technology, Oracle Consulting  Technorati Tags: rlwrap bash sqlplus

    Read the article

  • Internet Timeouts with TP-Link TL-WN821N v2 wireless usb stick

    - by user1622959
    A short time after accessing the internet, the browser/download times out. Before the timeout, the internet works OK briefly; afterwards, the wireless is still connected with a strong signal, but every internet access results in a timeout. When I leave the PC for a while, the internet is back just to timeout again as soon as I start using it. The same happens when I reconnect to the router. Also, when I surf the internet, it takes a couple of minutes until the timeout, but when I download something, it times out in a matter of seconds. The wireless adapter works just fine in Windows and internet via ethernet cable works just fine in Ubuntu. Does anyone have the same problem or knows a solution. I use Ubuntu 12.10 x64. The problem occurs since I installed ubuntu (which was a few days ago). Here some stuff that might be usefull: serus@serus-Ubuntu-PC:~$ lsusb Bus 002 Device 002: ID 0cf3:1002 Atheros Communications, Inc. TP-Link TL-WN821N v2 802.11n [Atheros AR9170] serus@serus-Ubuntu-PC:~$ lsmod Module Size Used by carl9170 82083 0 serus@serus-Ubuntu-PC:~$ modinfo carl9170 filename: /lib/modules/3.5.0-21- generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko alias: arusb_lnx alias: ar9170usb firmware: carl9170-1.fw description: Atheros AR9170 802.11n USB wireless serus@serus-Ubuntu-PC:~$ iwconfig wlan0 IEEE 802.11bgn ESSID:"virginmedia0137463" Mode:Managed Frequency:2.462 GHz Access Point: A0:21:B7:F8:29:B6 Bit Rate=240 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=66/70 Signal level=-44 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:1399 Invalid misc:18 Missed beacon:0 serus@serus-Ubuntu-PC:~$ sudo lshw -C network *-network description: Wireless interface physical id: 1 bus info: usb@2:2 logical name: wlan0 serial: 00:27:19:bb:00:19 capabilities: ethernet physical wireless configuration: broadcast=yes driver=carl9170 driverversion=3.5.0-21-generic firmware=1.9.4 ip=192.168.0.6 link=yes multicast=yes wireless=IEEE 802.11bgn

    Read the article

  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" is like this: create a named file or folder - this file is given a unique node (ex. 2345) as well as a default "filename" (ex. /WelcomeToOurProductsPage) and apply a template assign one or more aliases to the file for a URL redirect (ex. /home-page-products - can also be accessed by /home-page-products.aspx) A new Rewrite command is written on the .htaccess file for each and every alias Server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume) Node 2345 grabs content stored in a SQL Db and applies it to the template. Note: There are no actual files being created on the filesystem. It's entirely virtual. This is probably a very common thing, but since I have never run across this kind of system before two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-sourced systems, so it may be that this is common practice for ASP developers. My question, that has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? To having the .htaccess file hold rewrite rules for every single alias?

    Read the article

  • Server 2003 SP2 BSOD caused by fltmgr.sys

    - by MasterMax1313
    I'm running into a problem where a Server 2003 SP2 box has started crashing roughly once an hour, BSODing out with the message that fltmgr.sys is probably the cause. I ran dumpchk.exe on the memory.dmp file, indicating the same thing. Any thoughts on typical root causes? The following is the error code I'm seeing: Error code 0000007e, parameter1 c0000005, parameter2 f723e087, parameter3 f78cea8c, parameter4 f78ce788. After running dumpchk on the memory.dmp file, I get the following note: Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) The full log is here: Microsoft (R) Windows Debugger Version 6.12.0002.633 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [c:\windows\memory.dmp] Kernel Complete Dump File: Full address space is available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name: Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Loading Kernel Symbols ............................................................... ................................................. Loading User Symbols Loading unloaded module list ... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. *** ERROR: Symbol file could not be found. 
Defaulted to export symbols for fltmgr.sys - --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- ----- 32 bit Kernel Full Dump Analysis DUMP_HEADER32: MajorVersion 0000000f MinorVersion 00000ece KdSecondaryVersion 00000000 DirectoryTableBase 004e7000 PfnDataBase 81600000 PsLoadedModuleList 8089ffa8 PsActiveProcessHead 808a61c8 MachineImageType 0000014c NumberProcessors 00000001 BugCheckCode 0000007e BugCheckParameter1 c0000005 BugCheckParameter2 f723e087 BugCheckParameter3 f78dea8c BugCheckParameter4 f78de788 PaeEnabled 00000001 KdDebuggerDataBlock 8088e3e0 SecondaryDataState 00000000 ProductType 00000003 SuiteMask 00000110 Physical Memory Description: Number of runs: 3 (limited to 3) FileOffset Start Address Length 00001000 0000000000001000 0009e000 0009f000 0000000000100000 bfdf0000 bfe8f000 00000000bff00000 00100000 Last Page: 00000000bff8e000 00000000bffff000 KiProcessorBlock at 8089f300 1 KiProcessorBlock entries: ffdff120 Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name:*** ERROR: Module load completed but symbols could not be loaded for srv.sys Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 start end module name 80800000 80a50000 nt Tue Oct 19 10:00:49 2010 (4CBDA491) 80a50000 80a6f000 hal Sat Feb 17 00:48:25 2007 (45D69729) b83d4000 b83fe000 Fastfat Sat Feb 17 01:27:55 2007 (45D6A06B) b8476000 b84a1000 RDPWD Sat Feb 17 00:44:38 2007 (45D69646) b8549000 b8554000 TDTCP Sat Feb 17 00:44:32 2007 (45D69640) b8fe1000 b9045000 srv Thu Feb 17 11:58:17 2011 (4D5D53A9) b956d000 b95be000 HTTP Fri Nov 06 07:51:22 2009 (4AF41BCA) b9816000 b982d780 hgfs Tue Aug 12 20:36:54 2008 (48A22CA6) b9b16000 b9b20000 ndisuio Sat Feb 17 00:58:25 2007 (45D69981) b9cf6000 b9d1ac60 iwfsd Wed Sep 29 01:43:59 2004 (415A4B9F) b9e5b000 b9e62000 parvdm Tue Mar 25 03:03:49 2003 (3E7FFF55) b9e63000 b9e67860 lgtosync Fri Sep 12 04:38:13 2003 (3F6185F5) b9ed3000 b9ee8000 Cdfs Sat Feb 17 01:27:08 2007 (45D6A03C) b9f10000 b9f2e000 EraserUtilRebootDrv Thu Jul 07 21:45:11 2011 (4E166127) b9f2e000 b9f8c000 eeCtrl Thu Jul 07 21:45:11 2011 (4E166127) b9f8c000 b9f9d000 Fips Sat Feb 17 01:26:33 2007 (45D6A019) b9f9d000 ba013000 mrxsmb Fri Feb 18 10:22:23 2011 (4D5E8EAF) ba013000 ba043000 rdbss Wed Feb 24 10:54:03 2010 (4B854B9B) ba043000 ba0ad000 SPBBCDrv Mon Dec 14 23:39:00 2009 (4B2712E4) ba0ad000 ba0d7000 afd Thu Feb 10 08:42:18 2011 (4D53EB3A) ba0d7000 ba108000 netbt Sat Feb 17 01:28:57 2007 (45D6A0A9) ba108000 ba19c000 tcpip Sat Aug 15 05:53:38 2009 (4A8685A2) ba19c000 ba1b5000 ipsec Sat Feb 17 01:29:28 2007 (45D6A0C8) ba275000 ba288600 NAVENG Fri Jul 29 08:10:02 2011 (4E32A31A) ba289000 ba2ae000 SYMEVENT Thu Apr 15 21:31:23 2010 (4BC7BDEB) ba2ae000 ba42d300 NAVEX15 Fri Jul 29 08:07:28 2011 (4E32A280) ba42e000 ba479000 SRTSP Fri Mar 04 15:31:08 2011 (4D714C0C) ba485000 ba487b00 dump_vmscsi Wed Apr 11 13:55:32 2007 (461D2114) ba4e1000 ba540000 update Mon May 28 08:15:16 2007 (465AC7D4) ba568000 ba59f000 rdpdr Sat Feb 17 00:51:00 2007 (45D697C4) ba59f000 ba5b1000 raspptp Sat Feb 17 01:29:20 2007 (45D6A0C0) ba5b1000 ba5ca000 ndiswan Sat Feb 17 01:29:22 2007 (45D6A0C2) ba5da000 ba5e4000 dump_diskdump Sat Feb 17 01:07:44 2007 (45D69BB0) ba66a000 ba67e000 rasl2tp Sat Feb 17 01:29:02 2007 (45D6A0AE) ba67e000 ba69a000 VIDEOPRT Sat Feb 17 
01:10:30 2007 (45D69C56) ba69a000 ba6c1000 ks Sat Feb 17 01:30:40 2007 (45D6A110) ba6c1000 ba6d5000 redbook Sat Feb 17 01:07:26 2007 (45D69B9E) ba6d5000 ba6ea000 cdrom Sat Feb 17 01:07:48 2007 (45D69BB4) ba6ea000 ba6ff000 serial Sat Feb 17 01:06:46 2007 (45D69B76) ba6ff000 ba717000 parport Sat Feb 17 01:06:42 2007 (45D69B72) ba717000 ba72a000 i8042prt Sat Feb 17 01:30:40 2007 (45D6A110) baff0000 baff3700 CmBatt Sat Feb 17 00:58:51 2007 (45D6999B) bf800000 bf9d3000 win32k Thu Mar 03 08:55:02 2011 (4D6F9DB6) bf9d3000 bf9ea000 dxg Sat Feb 17 01:14:39 2007 (45D69D4F) bf9ea000 bf9fec80 vmx_fb Sat Aug 16 07:23:10 2008 (48A6B89E) bf9ff000 bfa4a000 ATMFD Tue Feb 15 08:19:22 2011 (4D5A7D5A) bff60000 bff7e000 RDPDD Sat Feb 17 09:01:19 2007 (45D70AAF) f7214000 f723a000 KSecDD Mon Jun 15 13:45:11 2009 (4A3688A7) f723a000 f725f000 fltmgr Sat Feb 17 00:51:08 2007 (45D697CC) f725f000 f7272000 CLASSPNP Sat Feb 17 01:28:16 2007 (45D6A080) f7272000 f7283000 symmpi Mon Dec 13 16:03:14 2004 (41BE0392) f7283000 f72a2000 SCSIPORT Sat Feb 17 01:28:41 2007 (45D6A099) f72a2000 f72bf000 atapi Sat Feb 17 01:07:34 2007 (45D69BA6) f72bf000 f72e9000 volsnap Sat Feb 17 01:08:23 2007 (45D69BD7) f72e9000 f7315000 dmio Sat Feb 17 01:10:44 2007 (45D69C64) f7315000 f733c000 ftdisk Sat Feb 17 01:08:05 2007 (45D69BC5) f733c000 f7352000 pci Sat Feb 17 00:59:03 2007 (45D699A7) f7352000 f7386000 ACPI Sat Feb 17 00:58:47 2007 (45D69997) f7487000 f7490000 WMILIB Tue Mar 25 03:13:00 2003 (3E80017C) f7497000 f74a6000 isapnp Sat Feb 17 00:58:57 2007 (45D699A1) f74a7000 f74b4000 PCIIDEX Sat Feb 17 01:07:32 2007 (45D69BA4) f74b7000 f74c7000 MountMgr Sat Feb 17 01:05:35 2007 (45D69B2F) f74c7000 f74d2000 PartMgr Sat Feb 17 01:29:25 2007 (45D6A0C5) f74d7000 f74e7000 disk Sat Feb 17 01:07:51 2007 (45D69BB7) f74e7000 f74f3000 Dfs Sat Feb 17 00:51:17 2007 (45D697D5) f74f7000 f7501000 crcdisk Sat Feb 17 01:09:50 2007 (45D69C2E) f7507000 f7517000 agp440 Sat Feb 17 00:58:53 2007 (45D6999D) f7517000 f7522000 TDI Sat Feb 17 01:01:19 2007 (45D69A2F) f7527000 f7532000 ptilink Sat Feb 17 01:06:38 2007 (45D69B6E) f7537000 f7540000 raspti Sat Feb 17 00:59:23 2007 (45D699BB) f7547000 f7556000 termdd Sat Feb 17 00:44:32 2007 (45D69640) f7557000 f7561000 Dxapi Tue Mar 25 03:06:01 2003 (3E7FFFD9) f7577000 f7580000 mssmbios Sat Feb 17 00:59:12 2007 (45D699B0) f7587000 f7595000 NDProxy Wed Nov 03 09:25:59 2010 (4CD162E7) f75a7000 f75b1000 flpydisk Tue Mar 25 03:04:32 2003 (3E7FFF80) f75b7000 f75c0080 SRTSPX Fri Mar 04 15:31:24 2011 (4D714C1C) f75d7000 f75e3000 vga Sat Feb 17 01:10:30 2007 (45D69C56) f75e7000 f75f2000 Msfs Sat Feb 17 00:50:33 2007 (45D697A9) f75f7000 f7604000 Npfs Sat Feb 17 00:50:36 2007 (45D697AC) f7607000 f7615000 msgpc Sat Feb 17 00:58:37 2007 (45D6998D) f7617000 f7624000 netbios Sat Feb 17 00:58:29 2007 (45D69985) f7627000 f7634000 wanarp Sat Feb 17 00:59:17 2007 (45D699B5) f7637000 f7646000 intelppm Sat Feb 17 00:48:30 2007 (45D6972E) f7647000 f7652000 kbdclass Sat Feb 17 01:05:39 2007 (45D69B33) f7657000 f7661000 mouclass Tue Mar 25 03:03:09 2003 (3E7FFF2D) f7667000 f7671000 serenum Sat Feb 17 01:06:44 2007 (45D69B74) f7677000 f7682000 fdc Sat Feb 17 01:07:16 2007 (45D69B94) f7687000 f7694b00 vmx_svga Sat Aug 16 07:22:07 2008 (48A6B85F) f7697000 f76a0000 watchdog Sat Feb 17 01:11:45 2007 (45D69CA1) f76a7000 f76b0000 ndistapi Sat Feb 17 00:59:19 2007 (45D699B7) f76b7000 f76c6000 raspppoe Sat Feb 17 00:59:23 2007 (45D699BB) f76c8000 f7707000 NDIS Sat Feb 17 01:28:49 2007 (45D6A0A1) f7707000 f770f000 kdcom Tue Mar 25 03:08:00 2003 
(3E800050) f770f000 f7717000 BOOTVID Tue Mar 25 03:07:58 2003 (3E80004E) f7717000 f771e000 intelide Sat Feb 17 01:07:32 2007 (45D69BA4) f771f000 f7726000 dmload Tue Mar 25 03:08:08 2003 (3E800058) f777f000 f7786000 dxgthk Tue Mar 25 03:05:52 2003 (3E7FFFD0) f7787000 f778e000 vmmemctl Tue Aug 12 20:37:25 2008 (48A22CC5) f77cf000 f77d6280 vmxnet Mon Sep 08 21:17:10 2008 (48C5CE96) f77d7000 f77df000 audstub Tue Mar 25 03:09:12 2003 (3E800098) f77ef000 f77f7000 Fs_Rec Tue Mar 25 03:08:36 2003 (3E800074) f77f7000 f77fe000 Null Tue Mar 25 03:03:05 2003 (3E7FFF29) f77ff000 f7806000 Beep Tue Mar 25 03:03:04 2003 (3E7FFF28) f7807000 f780f000 mnmdd Tue Mar 25 03:07:53 2003 (3E800049) f780f000 f7817000 RDPCDD Tue Mar 25 03:03:05 2003 (3E7FFF29) f7817000 f781f000 rasacd Tue Mar 25 03:11:50 2003 (3E800136) f7878000 f7897000 Mup Tue Apr 12 15:05:46 2011 (4DA4A28A) f7897000 f7899980 compbatt Sat Feb 17 00:58:51 2007 (45D6999B) f789b000 f789e900 BATTC Sat Feb 17 00:58:46 2007 (45D69996) f789f000 f78a1b00 vmscsi Wed Apr 11 13:55:32 2007 (461D2114) f79af000 f79b0280 vmmouse Mon Aug 11 07:16:51 2008 (48A01FA3) f79b1000 f79b2280 swenum Sat Feb 17 01:05:56 2007 (45D69B44) f7b4a000 f7bdf000 Ntfs Sat Feb 17 01:27:23 2007 (45D6A04B) Unloaded modules: ba65a000 ba668000 imapi.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 0000E000 ba1c4000 ba1d5000 vpc-8042.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00011000 f77df000 f77e7000 Sfloppy.SYS Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00008000 ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- Finished dump check
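
    The analysis above warns repeatedly that kernel symbols are wrong, so the fltmgr.sys attribution may itself be off; a possible next step (taken straight from the hints in the log, with the local cache path being an arbitrary choice) is to point the debugger at the Microsoft symbol server and rerun the analysis against memory.dmp:

        .sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols
        .reload
        !analyze -v
        lmvm fltmgr

    With good symbols, !analyze -v usually names the driver that passed the bad pointer into the filter manager (filter drivers such as antivirus are a common culprit behind fltmgr.sys 0x7E crashes), which is more actionable than fltmgr.sys itself.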

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.   TIP It is not necessary to highlight and copy the row or column descriptions.   Next, from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on, for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize, trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (but which is taken from another context).   So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:   By selecting the Manage POV option from the Smart View menu in Word or Power Point…   … the following POV Manager – Queries window opens:   You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window. 
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
OOW 2013 is over and we're heading home, so it is time to lean back and reflect on the impressions we took away from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: it was the biggest OOW ever. Parallel to the conference the America's Cup took place in San Francisco and Oracle Team America won. Amazing job by the team and again congratulations from our side. Back to the conference. The main topics for us were: Oracle SOA / BPM Suite 12c, Adaptive Case Management (ACM), Big Data, Fast Data, Cloud and Mobile. Below we go into a little more detail on the key takeaways for each of these points. Oracle SOA / BPM Suite 12c: During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location; enhanced REST support through a new REST binding; a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce; enhanced analytics with BAM, which has been completely reengineered (the BAM Console now also runs in Firefox!); templates (OSB pipelines, component templates, BPEL activity templates); EM as a single monitoring console; OSB design-time integration into JDeveloper (really great!); and enterprise modeling capabilities in BPM Composer. These are only a few points from what is coming with 12c. We are really looking forward to the new release, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on its way – very impressive. Adaptive Case Management: Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap and the differences from traditional business process modeling. They were very busy during the conference because a lot of partners and customers were interested. Big Data: Big Data is one of the current hype topics. Because of huge data volumes from different internal and external sources, handling this data becomes more and more challenging. Companies need to analyze the data to optimize their business, and the challenge is that the amount of data is growing daily! To store and analyze the data efficiently, a scalable and flexible infrastructure is necessary, and it is important that hardware and software are engineered to work together. Therefore several new features of Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle's new M6-32, were announced. The performance improvements when using these hardware components together with the improved software solutions were really impressive. For more details about this, please take a look at our previous blog post. 
Regarding Big Data, Oracle also introduced its Big Data architecture, which consists of: Oracle Big Data Appliance, which comes preconfigured with Hadoop; Oracle Exadata, which stores huge amounts of data efficiently to achieve optimal query performance; and Oracle Exalytics as a fast and scalable business analytics system. Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using "R". In addition, Oracle BI tools can be used to analyze the data. Fast Data: Fast Data is a complementary approach to Big Data. Huge amounts of mostly unstructured data come in via different channels at high frequency. Analyzing these data streams is also important for companies, because the incoming data has to be checked for business-relevant patterns in real time, and these patterns must be identified efficiently and with high performance. Here, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams with regard to customer satisfaction: feeds containing negative words like "bad" or "worse" were filtered, and once a defined threshold was reached within a certain timeframe, a business event was triggered (a simplified sketch of this pattern follows below). Cloud: Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced its Cloud strategy and vision – companies can focus on their real business while all of their applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can also build, deploy and run their own applications within the Cloud. Three different approaches were introduced: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). With the IaaS approach, only the infrastructure components are managed in the Cloud. Customers remain very flexible regarding memory, storage or number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided within the Cloud. Here customers can also decide whether installation and management of these components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of its applications, like ERP systems or HR applications, as Cloud services. In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days were very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference! 
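To make the Fast Data example above concrete: the pattern boils down to filtering a stream of events, counting the matches inside a sliding time window, and raising a business event once a threshold is crossed. The following is only a generic, plain C# sketch of that pattern - it is not Oracle Event Processing or Coherence code, and the word list, class name and thresholds are invented for the illustration.

using System;
using System.Collections.Generic;
using System.Linq;

// Generic sketch: filter a tweet stream, count hits in a sliding window,
// and fire a business event once a defined threshold is reached.
public class NegativeTweetMonitor
{
    // Example word list; a real solution would use proper sentiment analysis.
    private static readonly string[] NegativeWords = { "bad", "worse", "terrible" };

    private readonly List<DateTime> _hits = new List<DateTime>();
    private readonly TimeSpan _window;
    private readonly int _threshold;

    // The "business event" from the example above.
    public event Action ThresholdExceeded;

    public NegativeTweetMonitor(TimeSpan window, int threshold)
    {
        _window = window;
        _threshold = threshold;
    }

    public void OnTweet(string text, DateTime receivedAt)
    {
        // Filter step: only tweets containing a negative word count as hits.
        if (!NegativeWords.Any(w => text.IndexOf(w, StringComparison.OrdinalIgnoreCase) >= 0))
            return;

        _hits.Add(receivedAt);
        // Keep only hits that fall inside the sliding time window.
        _hits.RemoveAll(t => receivedAt - t > _window);

        // Raise the business event once the threshold is reached.
        if (_hits.Count >= _threshold && ThresholdExceeded != null)
            ThresholdExceeded();
    }
}

For instance, new NegativeTweetMonitor(TimeSpan.FromMinutes(5), 100) would raise the event once 100 negative tweets arrive within a five-minute window; in the solutions shown at OOW, this kind of logic runs inside the event-processing engine, with the in-memory grid holding the window state.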
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Design advice for avoiding change in several classes

    - by Anders Svensson
Hi, I'm trying to figure out how to design a small application more elegantly, and make it more resistant to change. Basically it is a sort of project price calculator, and the problem is that there are many parameters that can affect the pricing. I'm trying to avoid cluttering the code with a lot of if-clauses for each parameter, but still I have e.g. if-clauses in two places checking for the value of the size parameter. I have the Head First Design Patterns book, and have tried to find ideas there, but the closest I got was the decorator pattern, which has an example where Starbuzz Coffee sets prices depending first on condiments added, and then later in an exercise by adding a size parameter (Tall, Grande, Venti). But that didn't seem to help, because adding that parameter still seemed to add if-clause complexity in a lot of places (and this being an exercise they didn't explain it further). What I am trying to avoid is having to change several classes if a parameter were to change or a new parameter be added, or at least to change in as few places as possible (there's some fancy design principle name for this that I don't remember :-)). Below is the code. Basically it calculates the price for a project that has the tasks "Writing" and "Analysis" with a size parameter and different pricing models. There will be other parameters coming in later too, like "How new is the product?" (New, 1-5 years old, 6-10 years old), etc. Any advice on the best design would be greatly appreciated, whether a "design pattern" or just good object-oriented principles that would make it resistant to change (e.g. adding another size, or changing one of the size values, and only having to change in one place rather than in several if-clauses):

public class Project
{
    private readonly int _numberOfProducts;
    protected Size _size;

    public Task Analysis { get; set; }
    public Task Writing { get; set; }

    public Project(int numberOfProducts)
    {
        _numberOfProducts = numberOfProducts;
        _size = GetSize();
        Analysis = new AnalysisTask(numberOfProducts, _size);
        Writing = new WritingTask(numberOfProducts, _size);
    }

    private Size GetSize()
    {
        if (_numberOfProducts <= 2) return Size.small;
        if (_numberOfProducts <= 8) return Size.medium;
        return Size.large;
    }

    public double GetPrice()
    {
        return Analysis.GetPrice() + Writing.GetPrice();
    }
}

public abstract class Task
{
    protected readonly int _numberOfProducts;
    protected Size _size;
    protected double _pricePerHour;
    protected Dictionary<Size, int> _hours;

    public abstract int TotalHours { get; }
    public double Price { get; set; }

    protected Task(int numberOfProducts, Size size)
    {
        _numberOfProducts = numberOfProducts;
        _size = size;
    }

    public double GetPrice()
    {
        return _pricePerHour * TotalHours;
    }
}

public class AnalysisTask : Task
{
    public AnalysisTask(int numberOfProducts, Size size)
        : base(numberOfProducts, size)
    {
        _pricePerHour = 850;
        _hours = new Dictionary<Size, int>()
            { { Size.small, 56 }, { Size.medium, 104 }, { Size.large, 200 } };
    }

    public override int TotalHours
    {
        get { return _hours[_size]; }
    }
}

public class WritingTask : Task
{
    public WritingTask(int numberOfProducts, Size size)
        : base(numberOfProducts, size)
    {
        _pricePerHour = 650;
        _hours = new Dictionary<Size, int>()
            { { Size.small, 125 }, { Size.medium, 100 }, { Size.large, 60 } };
    }

    public override int TotalHours
    {
        get
        {
            if (_size == Size.small)
                return _hours[_size] * _numberOfProducts;
            if (_size == Size.medium)
                return (_hours[Size.small] * 2) + (_hours[Size.medium] * (_numberOfProducts - 2));
            return (_hours[Size.small] * 2) + (_hours[Size.medium] * (8 - 2)) + (_hours[Size.large] * (_numberOfProducts - 8));
        }
    }
}

public enum Size { small, medium, large }

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        List<int> quantities = new List<int>();
        for (int i = 0; i < 100; i++) { quantities.Add(i); }
        comboBoxNumberOfProducts.DataSource = quantities;
    }

    private void comboBoxNumberOfProducts_SelectedIndexChanged(object sender, EventArgs e)
    {
        Project project = new Project((int)comboBoxNumberOfProducts.SelectedItem);
        labelPrice.Text = project.GetPrice().ToString();
        labelWriterHours.Text = project.Writing.TotalHours.ToString();
        labelAnalysisHours.Text = project.Analysis.TotalHours.ToString();
    }
}

At the end is the current calling code, in the change event for a combobox that sets the size... (BTW, I don't like the fact that I have to use several dots to get to TotalHours at the end here either; as far as I can recall, that violates the "principle of least knowledge" or "Law of Demeter", so input on that would be appreciated too, but it's not the main point of the question) Regards, Anders
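A minimal sketch of one possible direction for the above, purely as an illustration (the names here are invented and the hour values are copied from the question): keep every size-dependent number - the product-count thresholds and the hour tables - in a single band table, so adding a size or moving a boundary touches one place instead of several if-clauses.

using System;
using System.Linq;

// Sketch only: all size-dependent numbers live in one table.
public enum Size { small, medium, large }   // mirrors the enum from the question

public class SizeBand
{
    public Size Size;             // band identifier
    public int MaxProducts;       // highest product count still covered by this band
    public int AnalysisHours;     // flat analysis hours for the band
    public int WritingHoursEach;  // writing hours per product inside the band
}

public static class Pricing
{
    // Values copied from the question; int.MaxValue marks the open-ended top band.
    static readonly SizeBand[] Bands =
    {
        new SizeBand { Size = Size.small,  MaxProducts = 2,            AnalysisHours = 56,  WritingHoursEach = 125 },
        new SizeBand { Size = Size.medium, MaxProducts = 8,            AnalysisHours = 104, WritingHoursEach = 100 },
        new SizeBand { Size = Size.large,  MaxProducts = int.MaxValue, AnalysisHours = 200, WritingHoursEach = 60  },
    };

    // Flat lookup: the first band whose MaxProducts covers the quantity.
    public static int AnalysisHours(int products)
    {
        return Bands.First(b => products <= b.MaxProducts).AnalysisHours;
    }

    // Tiered calculation: fill each band up to its capacity, then move to the next.
    public static int WritingHours(int products)
    {
        int hours = 0, previousMax = 0;
        foreach (var band in Bands)
        {
            int inThisBand = Math.Min(products, band.MaxProducts) - previousMax;
            if (inThisBand <= 0) break;
            hours += inThisBand * band.WritingHoursEach;
            previousMax = band.MaxProducts;
        }
        return hours;
    }
}

With this shape, a later parameter such as product age could become another table-driven factor applied in the same place, rather than another round of if-clauses spread across classes.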

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • SEO: Nested List vs List, Split Over Divs vs Definition List

    - by Jon P
From an SEO perspective, which, if any, is better?

Option 1: Nested lists with h2 tags

<ul id="mainpoints">
  <li><h2>Powerful Analysis</h2>
    <ul>
      <li>Charting and indicators</li>
      <li>Daily trading signals</li>
      <li>Company health checks</li>
    </ul>
  </li>
  <li><h2>World Market Data</h2>
    <ul>
      [List Items removed for brevity]
    </ul>
  </li>
  <li><h2>Daily Market Data</h2>
    <ul>
      [List Items removed for brevity]
    </ul>
  </li>
</ul>

Option 2: Divs with h2 and lists

<div id="mainpoints">
  <div>
    <h2>Powerful Analysis</h2>
    <ul>
      <li>Charting and indicators</li>
      <li>Daily trading signals</li>
      <li>Company health checks</li>
    </ul>
  </div>
  <div>
    <h2>World Market Data</h2>
    <ul>
      [List Items removed for brevity]
    </ul>
  </div>
  <div>
    <h2>Daily Market Data</h2>
    <ul>
      [List Items removed for brevity]
    </ul>
  </div>
</div>

Option 3: Definition list

<dl id="mainpoints">
  <dt>Powerful Analysis</dt>
  <dd>- Charting and indicators</dd>
  <dd>- Daily trading signals</dd>
  <dd>- Company health checks</dd>
  <dt>World Market Data</dt>
  [List Items removed for brevity]
  <dt>Daily Market Data</dt>
  [List Items removed for brevity]
</dl>

My instincts tell me that semantically the pure list options (1 & 3) are the best, and that h2 may be more SEO-friendly (1 & 2), which would point to option 1 as the best option. I do love the lean markup of the definition list, but will I take an SEO hit by losing the h2 tags? Before anyone asks, h2 is not valid markup in a dt tag. Are my instincts right that a nested list is the way to go?

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >