Search Results

Search found 50650 results on 2026 pages for 'html select'.


  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following:

    - It combines Request and Reply in a single step. In prior releases of Oracle SOA Suite, you had to configure two distinct adapters.
    - It performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well.

    To configure JMSAdapter Request-Reply, follow these steps:

    1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor.
    2) Accept the default values on the first few screens of the JMS Adapter wizard until you reach the screen that prompts for the operation name. Select "Request-Reply" as the "Operation Type" and "Asynchronous" as the "Operation Name".
    3) Select the Request and Reply queues on the following screens of the wizard. The message will be enqueued on the "Request" queue and the reply will be returned on the "Reply" queue. The reason I use a message selector here is that the back-end system that reads from the request queue and generates the response in the response queue actually produces more than one response, so I must use a filter to exclude the unwanted ones.
    4) Select the message schema for the request as well as the response.
    5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Note that I am setting an additional header because my third-party application requires it.
    6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Make sure the "Create Instance" option is unchecked.

    Your completed BPEL process will look something like this:
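    A rough sketch of what the <invoke>/<receive> pair might look like in the .bpel source. All partner link, port type and variable names here are hypothetical placeholders, since the original post showed this step only as a screenshot:

    ```xml
    <!-- invoke the Request operation; the extra header from step 5 travels in a
         header variable (bpelx:inputHeaderVariable is Oracle's BPEL extension) -->
    <invoke name="InvokeJMSRequest"
            partnerLink="RequestReplyService"
            portType="ns1:Request_ptt"
            operation="Request"
            inputVariable="RequestMessage"
            bpelx:inputHeaderVariable="JMSHeaderVariable"/>

    <!-- receive the correlated reply; createInstance="no" matches the
         unchecked "Create Instance" option in step 6 -->
    <receive name="ReceiveJMSReply"
             partnerLink="RequestReplyService"
             portType="ns1:Reply_ptt"
             operation="Reply"
             variable="ReplyMessage"
             createInstance="no"/>
    ```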

    Read the article

  • Redmine on Redhat/CentOS 5 Without using virtual hosts

    - by flyclassic
    I've followed all the steps to install Redmine on CentOS 5 except for the Apache part: http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5. I do not want to configure a virtual host, as we are not using virtual hosts. Can I configure Redmine to run at http://hostname/redmine? Apparently it doesn't work in my case. Redmine was extracted into the web server document root /var/www/html/, as /var/www/html/redmine. What I did was add a redmine.conf to /etc/httpd/conf.d/ with the following configuration and restart the server:

    ```apache
    <Location "/redmine">
        Options Indexes ExecCGI FollowSymLinks -MultiViews
        Order allow,deny
        Allow from all
        AllowOverride all
        PassengerEnabled On
        RailsBaseURI /var/www/html/redmine
        RailsEnv production
    </Location>
    ```

    Now I get this error ("Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem."):

    Error message: No such file or directory - config/environment.rb
    Exception class: Errno::ENOENT
    Application root: /var/www/html

    Where have I gone wrong?
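    For what it's worth, RailsBaseURI expects the public sub-URI, not a filesystem path, which lines up with the error showing Application root: /var/www/html. A sketch of the usual Passenger sub-URI layout (paths are illustrative; it assumes the app can be moved outside the document root first):

    ```apache
    # Hypothetical fix: keep the Rails app outside the document root,
    # symlink only its public/ directory into it, and give RailsBaseURI
    # the URI prefix rather than a filesystem path:
    #   mv /var/www/html/redmine /var/www/redmine
    #   ln -s /var/www/redmine/public /var/www/html/redmine
    <Directory "/var/www/html/redmine">
        Options -MultiViews
        PassengerEnabled On
        RailsBaseURI /redmine
        RailsEnv production
    </Directory>
    ```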

    Read the article

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

    ```python
    #!/usr/bin/python -u
    import cgi, errno, fcntl, os, os.path, sys, time

    print("""Content-Type: text/html; charset=utf-8

    <!doctype html>
    <html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
    </head>
    <body>
    """)

    ftempname = '/tmp/ipc-messages'
    master = not os.path.exists(ftempname)
    if master:
        fmode = 'w'
    else:
        fmode = 'r'

    print('<p>Opening file</p>')
    sys.stdout.flush()
    ftemp = open(ftempname, fmode)
    print('<p>File opened</p>')

    if master:
        print('<p>Operating as master</p>')
        sys.stdout.flush()
        for i in range(10):
            print('<p>' + str(i) + '</p>')
            sys.stdout.flush()
            time.sleep(1)
        ftemp.close()
        os.remove(ftempname)
    else:
        print('<p>Operating as a slave</p>')
        ftemp.close()

    print("""
    </body>
    </html>""")
    ```

    The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started; they only start after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

    ```php
    <!doctype html>
    <html lang="en">
    <!-- $Id: $ -->
    <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
    </head>
    <body>
    <p>
    <?php
    function echofl($str)
    {
        echo $str . "</b>\n";
        ob_flush();
        flush();
    }

    define('tempfn', '/tmp/emailsync');

    if (file_exists(tempfn))
        $perms = 'r+';
    else
        $perms = 'w';

    assert($fsync = fopen(tempfn, $perms));
    assert(chmod(tempfn, 0600));

    if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
        assert($wouldblock);
        $master = false;
    } else {
        $master = true;
    }

    if ($master) {
        echofl('Running as master.');
        assert(fwrite($fsync, 'content') != false);
        assert(sleep(5) == 0);
        assert(flock($fsync, LOCK_UN));
    } else {
        echofl('Running as slave.');
        echofl(fgets($fsync));
    }

    assert(fclose($fsync));
    echofl('Done.');
    ?>
    </p>
    </body>
    </html>
    ```
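    Two things worth ruling out here, neither confirmed by the post: browsers commonly serialize simultaneous requests to the identical URL, and PHP's session-file locking serializes requests per client once session_start() is involved (the test script above doesn't use sessions, which makes the browser the more likely culprit). Testing from the command line takes the browser out of the equation:

    ```sh
    # Two parallel requests, bypassing any browser-side request serialization.
    # The URL is hypothetical; point it at wherever the script is deployed.
    curl -s http://localhost/cgi-bin/ipc-test.py &
    curl -s http://localhost/cgi-bin/ipc-test.py &
    wait
    # If the outputs interleave here but two browser tabs serialize,
    # the browser (not Apache) is queueing the second request.
    ```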

    Read the article

  • htaccess rewrite and auth conflict

    - by Michael
    I have 2 directories, each with a .htaccess file.

    html/.htaccess contains a rewrite to send almost everything to url.php:

    ```apache
    RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
    RewriteRule (.*)$ /url.php [L]
    ```

    and html/exported/.htaccess:

    ```apache
    AuthType Basic
    AuthName "exported"
    AuthUserFile "/home/siteuser/.htpasswd"
    require valid-user
    ```

    If I remove html/exported/.htaccess, the rewriting works fine and the exported directory can be accessed. If I remove html/.htaccess, the authentication works fine. However, when I have both .htaccess files, exported/ is rewritten to /url.php. Any ideas how I can prevent it?
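    One thing worth noting: the RewriteCond pattern is anchored at the end of the URI, so it only exempts requests literally ending in "exported" or "exported/". Anything deeper, such as /exported/somefile with no matching extension, still falls through to url.php, and so can Apache's internal lookup of the 401 error document once authentication kicks in. A hedged sketch of a more robust exclusion, short-circuiting the whole subtree before the catch-all:

    ```apache
    # In html/.htaccess, ahead of the existing rules:
    # pass anything under exported/ through untouched
    RewriteRule ^exported(/.*)?$ - [L]

    RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
    RewriteRule (.*)$ /url.php [L]
    ```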

    Read the article

  • Arch Linux selecting packages

    - by allenskd
    Hello, I'm currently having a problem with my Arch Linux installation: I forgot to select a few packages, and the guide never mentions which key to press to select/deselect packages on the installation screen. I used to know how to select them, but it's been a long time and I don't remember anymore. Alternatively, if there is a way to make pacman recognize the CD as a source of packages, that would be great.

    Read the article

  • With Apache, is it possible to generate a directory listing for a non-folder URL?

    - by William Denniss
    Apache can generate a directory listing (when configured) if you visit a folder with no index.html. What I want to know is: is it possible to get that same listing at a different URL? I'm already using index.html and want to keep it that way. In other words, this is what I'm looking for:

    http://example.com/blar/ - loads my index.html page (don't want this to change)
    http://example.com/blar/directory_list - I want this URL to render the Apache directory listing instead
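    One hedged way to do this with Apache 2.4 (an untested sketch; mod_alias and mod_autoindex must be enabled, and the filesystem path /var/www/blar is assumed): alias the listing URL back onto the same directory, and disable the index lookup for just that location so autoindex takes over:

    ```apache
    Alias "/blar/directory_list" "/var/www/blar"
    <Location "/blar/directory_list">
        Options +Indexes
        # "DirectoryIndex disabled" is 2.4+ syntax; it stops index.html
        # from short-circuiting the automatic listing
        DirectoryIndex disabled
    </Location>
    ```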

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is requested, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:

    ```sh
    sudo apt-get install apache2-utils
    sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
    New password:
    Re-type new password:
    Adding password for user exampleuser
    ```

    My nginx configuration file looks like this:

    ```nginx
    server {
        listen 80;
        root /var/www;
        index index.php index.html index.htm;
        server_name example.com;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /bitmall/wp-admin/ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
        }

        location ~ /\.ht {
            deny all;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    ```

    I would appreciate your advice on this.
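    The likely cause is nginx's location selection: for a URI such as /bitmall/wp-admin/index.php, the regex location ~ \.php$ wins over the plain prefix location /bitmall/wp-admin/, so the auth_basic directives are never consulted. A sketch of one fix (untested against this exact setup): use the ^~ modifier so the prefix match is final, and nest a PHP handler inside it so admin scripts still execute:

    ```nginx
    location ^~ /bitmall/wp-admin/ {
        auth_basic "Restricted Section";
        auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

        # PHP under wp-admin/ must be handled here, because ^~ stops
        # the outer \.php$ regex location from ever matching it
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    ```

    Note also that wp-login.php lives in the WordPress root (/bitmall/wp-login.php), not under wp-admin/, so protecting it needs its own exact-match location carrying the same auth_basic lines plus the fastcgi directives. And keeping .htpasswd outside the web root is generally safer than storing it under /var/www.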

    Read the article

  • How to setup nginx and a subdomain

    - by Evolutio
    I have GitLab installed on my server and it currently answers on all domains, e.g. git.lars-dev.de, lars-dev.de and *.lars-dev.de. How can I run GitLab only on git.lars-dev.de and another site on files.lars-dev.de?

    My lars-dev conf:

    ```nginx
    server {
        listen *:80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        root /var/www/webdata/lars-dev.de/htdocs;
        index index.html index.htm;

        server_name lars-dev.de;

        location / {
            try_files $uri $uri/ /index.html;
        }

        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root /usr/share/nginx/www;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }
    ```

    and the GitLab configuration:

    ```nginx
    upstream gitlab {
        server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
    }

    server {
        listen *:80;                 # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
        server_name git.lars-dev.de; # e.g., server_name source.example.com;
        server_tokens off;           # don't show the version number, a security best practice
        root /home/git/gitlab/public;

        # individual nginx logs for this gitlab vhost
        access_log /var/log/nginx/gitlab_access.log;
        error_log  /var/log/nginx/gitlab_error.log;

        location / {
            # serve static files from defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
        }

        # if a file which is not found in the root folder is requested,
        # then the proxy passes the request to the upstream (gitlab unicorn)
        location @gitlab {
            proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitlab;
        }
    }
    ```
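    When a request's Host header matches no server_name, nginx routes it to the default server for that port, which (unless one is marked explicitly) is simply the first server block it loaded; that would explain GitLab answering for every domain. A hedged sketch of the usual remedy: add an explicit catch-all default and a dedicated vhost per subdomain (the files vhost path is a placeholder):

    ```nginx
    # catch-all: any Host that matches no other server_name ends up here
    # instead of falling into the GitLab vhost
    server {
        listen 80 default_server;
        server_name _;
        return 444;   # drop the connection without a response
    }

    server {
        listen 80;
        server_name files.lars-dev.de;
        root /var/www/webdata/files.lars-dev.de/htdocs;   # hypothetical path
        index index.html index.htm;
    }
    ```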

    Read the article

  • Static error page served by nginx when my application is down

    - by dreeves
    If my (Rails) application is down or undergoing database maintenance or whatever, I'd like to specify at the nginx level to serve a static page. So every URL like http://example.com/* should serve a static HTML file, like /var/www/example/foo.html. Trying to specify that in my nginx config is giving me fits and infinite loops and whatnot. I'm trying things like:

    ```nginx
    location / {
        root /var/www/example;
        index foo.html;
        rewrite ^/.+$ foo.html;
    }
    ```

    How would you get every URL on your domain to serve a single static file?
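    The infinite loop most likely comes from the rewrite being re-processed: without a flag, a successful rewrite restarts location matching, and the rewritten URI matches ^/.+$ all over again. Two common ways out, sketched on the assumption that the maintenance page lives at /var/www/example/foo.html:

    ```nginx
    # Option 1: stop rewrite processing inside this location with "break"
    location / {
        root /var/www/example;
        rewrite ^ /foo.html break;
    }

    # Option 2: let try_files serve the same file for every URI
    # location / {
    #     root /var/www/example;
    #     try_files /foo.html =404;
    # }
    ```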

    Read the article

  • variables used in inner queries

    - by wcpro
    I'm trying to build a query that looks something like this:

    ```sql
    select id,
           (select top 1 create_date
              from table2
             where table1id = t1.id
               and status = 'success') [last_success_date],
           (select count(*)
              from table2
             where table1id = t1.id
               and create_date > [last_success_date]) [failures_since_success]
    from table1 t1
    ```

    As you can see, [last_success_date] is not within the scope of the second subquery. I was wondering how I could access that value in other subqueries without having to rerun it.
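    One way to do this in SQL Server is to move the lookup into an APPLY so its result has a name the rest of the query can see. A sketch (it adds an ORDER BY, since TOP 1 without one returns an arbitrary row, which may or may not match the original intent):

    ```sql
    SELECT t1.id,
           ls.last_success_date,
           (SELECT COUNT(*)
              FROM table2
             WHERE table1id = t1.id
               AND create_date > ls.last_success_date) AS failures_since_success
    FROM table1 AS t1
    OUTER APPLY (SELECT TOP 1 create_date AS last_success_date
                   FROM table2
                  WHERE table1id = t1.id
                    AND status = 'success'
                  ORDER BY create_date DESC) AS ls;
    ```

    OUTER APPLY (rather than CROSS APPLY) keeps rows from table1 that have no successful entry yet, mirroring how the original correlated subquery would return NULL.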

    Read the article

  • Creating a new project for Team Foundation Server Basic

    - by Enrique Lima
    We have installed and configured TFS, and we have connected to it using Visual Studio. Now it is time to get a project created.

    From Team Explorer, right-click on the servername\Collection item in the tree and select New Team Project. This opens the New Team Project dialog. Provide a name, then click Next.

    The next step is to select a Project Template. By default you will have 2 available (but there are many downloadable options). It is important to understand what each template brings and which options we will live with in the lifecycle-management approach we select. Once selected, click Next.

    Now we are at the point of specifying where our code will be collected, the Source Code Settings part of the wizard. Since we are starting new, we will select an empty folder. Click Next.

    Next we get a Summary view of the options selected. Click Finish. Once the template is downloaded, applied, and our choices are processed, the project creation is complete. This should be our final product …

    Read the article

  • Useful Resources on ADF/JSF/JDeveloper

    - by vijaykumar.yenne
    In most of my interactions with the partner developer community working on either WebCenter projects or ADF-related projects, constant questions come up about documentation, samples, or step-by-step instructions for novices. Though most of the resources are available online on the OTN site, there seems to be a difficulty in finding the right resource for the job at hand, which I am yet to solve. However, here is a list of resources you should have visited if you are in the Oracle world and building rich internet applications using JSF/ADF.

    1. If you have just started with JDeveloper, want to learn the different nuances of ADF development, and want to deepen your knowledge, you should definitely go through these tutorials: http://www.oracle.com/technology/products/jdev/11/cuecards111/index.html
    2. Everything about JDeveloper, including the IDE download, demos, sample code, best practices, etc.: http://www.oracle.com/technology/products/jdev/index.html
    3. All about ADF: http://www.oracle.com/technology/products/adf/index.html
    4. Know more about ADF Faces: http://www.oracle.com/technology/products/adf/index.html
    5. If you want to deepen your knowledge, here is the aggregate list of all the blogs by our internal development teams and experts from around the globe. This is a really interesting feed, especially when you want to do a deep dive on various aspects and become an expert in the Oracle UI world: http://www.connotea.org/user/jdeveloper

    Last but not least, you should always leverage the community whenever you run into any issues: http://forums.oracle.com

    Read the article

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure GlassFish 3.1.2.2 for ADF 11g, and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do it, and documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example, I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the GlassFish command-line tool asadmin or through the admin console; I'm doing it through the admin console.

    1. Start GlassFish and connect to the admin console with the credentials you defined at installation: http://localhost:4848
    2. Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type and Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings etc.
    3. Go to the Additional Properties tab and create username, password, and url properties with the respective values.
    4. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously.
    5. Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add -Doracle.jdbc.J2EE13Compliant=true, which is used to make sure the driver behaves in a JEE-compliant manner.
    6. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11 database driver is ojdbc6dms.jar.

    An upcoming entry will demonstrate configuring GlassFish for Oracle ADF applications.
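    For the command-line route the post mentions, the same pool and resource can be scripted roughly like this. A hedged sketch: the pool name, JNDI name, credentials and URL are placeholders, and literal colons inside an asadmin property value have to be escaped with a backslash:

    ```sh
    asadmin create-jdbc-connection-pool \
      --datasourceclassname oracle.jdbc.pool.OracleDataSource \
      --restype javax.sql.DataSource \
      --property user=hr:password=hr:url="jdbc\:oracle\:thin\:@localhost\:1521\:XE" \
      OracleXEPool

    asadmin create-jdbc-resource --connectionpoolid OracleXEPool jdbc/OracleXE

    # sanity check: should report the ping executed successfully
    asadmin ping-connection-pool OracleXEPool
    ```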

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's a hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in SQL Server 2008 Standard or Enterprise; Express edition is not supported. Here are the steps you need to take; just remember SQL Agent must be running:

    ```sql
    use SomeDatabase;

    -- first create a table
    CREATE TABLE Author
    (
        ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1),
        Name NVARCHAR(20) NOT NULL,
        EMail NVARCHAR(50) NOT NULL,
        Birthday DATE NOT NULL
    )

    -- enable CDC at the DB level
    EXEC sys.sp_cdc_enable_db

    -- check CDC is enabled for the current DB
    SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase'

    -- enable CDC for table Author, all columns
    exec sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null

    -- insert values into table Author
    insert into Author (Name, EMail, Birthday) values ('Bla', 'bla@bla', '1990-10-10')

    -- check CDC data for table Author
    -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE, 4 = AFTER UPDATE
    -- __$start_lsn: operation timestamp
    select * from cdc.dbo_author_CT

    -- update table Author
    update Author set EMail = '[email protected]' where Name = 'Bla'

    -- check CDC data for table Author
    select * from cdc.dbo_author_CT

    -- delete from table Author
    delete from Author

    -- check CDC data for table Author
    select * from cdc.dbo_author_CT

    -- disable CDC for table Author
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author'

    -- disable CDC for the entire DB
    -- this removes all CDC data, so be careful
    exec sys.sp_cdc_disable_db
    ```
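    Rather than reading the cdc.dbo_author_CT change table directly, the supported route is the table-valued function CDC generates per capture instance, which takes an LSN range. A short sketch (the function name follows from the dbo_Author capture instance created above):

    ```sql
    -- query all changes captured so far for the dbo_Author capture instance
    DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10)
    SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Author')
    SET @to_lsn   = sys.fn_cdc_get_max_lsn()

    SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Author(@from_lsn, @to_lsn, N'all')
    ```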

    Read the article

  • How to Transpose Rows and Columns in Excel 2013

    - by Lori Kaufman
    You've set up your worksheet with all your row and column headings and you've entered all your data. Then you discover that it would look better if the rows were the columns and the columns were the rows. How do you accomplish this easily? There is an easy way to convert your rows to columns and vice versa using the Transpose feature in Excel. We'll show you how.

    1. Select the cells containing the headings and data you want to transpose.
    2. Click Copy or press Ctrl + C.
    3. Click in a blank cell on the spreadsheet. This cell will be the top-left corner of the new table of data.
    4. Click the down arrow on the Paste button and select Paste Special from the drop-down menu.
    5. On the Paste Special dialog box, select the Transpose check box so there is a check mark in the box and click OK.

    The rows become the columns and the columns become the rows. The original set of data still exists; you can select those cells and delete the headings and data, if desired. Isn't that a lot easier and faster than retyping all your data?

    Read the article

  • Nginx try_files or else continue matching against locations?

    - by Yang
    I'm wondering whether this is possible with nginx: I just added a directory with a bunch of HTML files (foo.html, bar.html) that I'd like to serve as /foo, /bar, etc. If the URL doesn't match a file name, I'd like to fall back to whatever the next-best matching location would be. So I have:

    ```nginx
    # This block is newly added.
    location ~ ^/([^/]+)$ {
        default_type text/html;
        alias /blah/$1.html;
    }

    # Our long list of existing subsystems below....
    location /subscribe {
        proxy_pass http://127.0.0.1:5000;
    }

    location /upload {
        proxy_pass http://127.0.0.1:8090;
        proxy_read_timeout 99999;
    }

    location ~ /(data|garbage|blargh).* {
        proxy_pass http://127.0.0.1:8090;
        proxy_read_timeout 99999;
        auth_basic text;
        auth_basic_user_file /etc/nginx/htpasswd;
    }
    ....
    ```

    The problem is that the first regex now eats up the URLs that would've gone to other locations, as per the documented behavior of location. One approach is to maintain the full explicit list of files in the first location block, but this list is quite large and always changing. Is there a way to check whether the file exists first and, if not, continue with what would've been the next-best location match? I took stabs at using try_files (including using a @fallback and nesting locations in there), but I don't think it's capable of doing this. However, I thought I'd ask here in case I'm missing something. (Or maybe there's another better approach altogether.)
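    As far as I know, try_files cannot resume the location search, so there is no literal "else keep matching". An approximation, sketched and untested: make the subsystem locations immune to the catch-all (declare the catch-all regex after the other regex locations, since the first matching regex wins, and mark plain prefixes with ^~ so they beat regexes outright), then let try_files handle file existence inside the catch-all with a single fallback:

    ```nginx
    # ^~ makes these prefix matches final, so the catch-all regex can't eat them
    location ^~ /subscribe { proxy_pass http://127.0.0.1:5000; }
    location ^~ /upload    { proxy_pass http://127.0.0.1:8090; proxy_read_timeout 99999; }

    # declared AFTER the /(data|garbage|blargh).* regex so that one is tried first
    location ~ ^/[^/]+$ {
        root /blah;                      # files stored as /blah/foo.html
        default_type text/html;
        try_files $uri.html @fallback;   # serve /blah/<name>.html if it exists
    }

    location @fallback {
        # a single fallback target; workable only if the leftover cases
        # can all be fronted by one upstream
        proxy_pass http://127.0.0.1:8090;
    }
    ```

    (root is used instead of alias here because try_files and alias historically don't combine well.)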

    Read the article

  • OpenWorld on your iPad and iPhone

    - by KLaker
    Most of you probably know that each year I publish a data warehouse guide for OpenWorld which contains links to the latest data warehouse videos, a calendar of the most important sessions and labs, and a section that provides profiles and relevant links for all the most important data warehouse presenters. For this year's conference I have made all this information available in an HTML app that runs on most smartphones and tablets. This exciting new web app contains information about why you should attend OpenWorld (just in case you have not yet booked your ticket!) as well as the following:

    - Getting to know 12c: a series of video interviews with George Lumpkin, Vice President of Data Warehouse Product Management
    - Your presenters: full biographies and links to social media sites for all the key data warehouse presenters
    - Must-see sessions: a list of all the most important data warehouse presentations at this year's conference
    - Our customers: profiles of our most important data warehouse customers
    - Must-attend labs: a list of all the most important data warehouse hands-on labs at this year's conference
    - Links: a list of links to the most important data warehouse sites

    If you want to run these web apps on your smartphone and/or tablet, follow these links:

    iPhone - https://876d5e65b7768ca57d1fd1236578c9374b1fca87.googledrive.com/host/0Bz-zGlWahRf4OXNzejBiRFV5ZXc/iPhone-DWoow2014.html
    iPad - https://876d5e65b7768ca57d1fd1236578c9374b1fca87.googledrive.com/host/0Bz-zGlWahRf4OXNzejBiRFV5ZXc/iPad-DW2014.html

    Android users: I have tested the app on Android and there appears to be a bug in the way the Chrome browser displays iframes, because scrolling does not work. The app does work correctly if you use either the Android version of the Opera browser or the standard Samsung browser.

    If you have any comments about the app (content you would like to see) then please let me know. Enjoy OpenWorld.

    Read the article

  • Nginx - Enable PHP for all hosts

    - by F21
    I am currently testing out nginx and have set up some virtual hosts by putting the configuration for each virtual host in its own file in a folder called sites-enabled. I then ask nginx to load all those config files using include C:/nginx/sites-enabled/*.conf;. This is my current config:

    ```nginx
    http {
        server_names_hash_bucket_size 64;
        include mime.types;
        include C:/nginx/sites-enabled/*.conf;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;

        server {
            listen 80;
            root C:/www-root;
            #charset koi8-r;
            #access_log logs/host.access.log main;

            location / {
                index index.html index.htm index.php;
            }

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

        server {
            server_name localhost;
        }
    }
    ```

    And this is one of the configs for a virtual host:

    ```nginx
    server {
        server_name testsubdomain.testdomain.com
        root C:/www-root/testsubdomain.testdomain.com;
    }
    ```

    The problem is that for testsubdomain.testdomain.com, I cannot get PHP scripts to run unless I have defined a location block with fastcgi parameters for it. What I would like to do, for maintainability, is enable PHP for all hosted sites on this server without adding a PHP location block with fastcgi parameters to each one. That way, if I need to change any fastcgi values for PHP, I can change them in one place. Is this something that's possible with nginx? If so, how can it be done?
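    Location blocks aren't inherited between server blocks, but includes make this cheap to maintain. A sketch of one common pattern (the snippet file name is hypothetical): factor the PHP handler into its own file and include it from every server block, so fastcgi settings change in exactly one place. Also worth checking: in the vhost above, server_name appears to be missing its trailing semicolon, though that may just be a copy/paste artifact.

    ```nginx
    # C:/nginx/php-handler.conf  (hypothetical shared snippet)
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    ```

    ```nginx
    # C:/nginx/sites-enabled/testsubdomain.conf
    server {
        server_name testsubdomain.testdomain.com;
        root C:/www-root/testsubdomain.testdomain.com;
        index index.html index.htm index.php;
        include C:/nginx/php-handler.conf;
    }
    ```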

    Read the article

  • application/x-httpd-php opening rather than pages?

    - by GuyNoir
    For some reason, no matter what page I try to open (.php, .html, .htm, etc.), my server tries to save it rather than displaying the page. However, I do get the "It works!" page for the main domain; it's only when I try to display a page in a sub-directory that it fails. I researched the problem, and I placed a .htaccess file in the directory of the pages that I'm attempting to open, with the code:

    ```apache
    AddType application/x-httpd-php5 .php .html .htm
    AddHandler applicaton/x-httpd-php5 .php .htm .html .my
    ```

    I'm running the latest version of Apache, and PHP 5 is installed as the "php5" module. Any help? http://i.stack.imgur.com/fh6dj.png
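    Two things stand out in that .htaccess, though without the full server config this is only a guess: the AddHandler line misspells "application" as "applicaton", and the handler name has to match what the installed PHP module actually registers (stock mod_php5 registers application/x-httpd-php; some distributions register application/x-httpd-php5 instead). A corrected sketch:

    ```apache
    # hypothetical corrected .htaccess for mod_php5;
    # swap in application/x-httpd-php5 if that is what your build registers
    AddType application/x-httpd-php .php .htm .html
    AddHandler application/x-httpd-php .php .htm .html
    ```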

    Read the article

  • Mounted HDD not having enough permissions from Apache/PHP

    - by Dan
    Piwigo gallery, on Apache and PHP. The root filesystem is a 128GB RAID, and /var/www/html is on it. I mounted the 320GB HDD to /var/www/html/320 using defaults; it's an ext4 filesystem. I put a symlink to it in /var/www/html/galleries, which is read by the gallery script so I can upload images there, then click sync. It gives me the error:

    [./galleries/] PWG-ERROR-NO-FS (File/directory read error)
    PWG-ERROR-NO-FS: The file or directory cannot be accessed (either it does not exist or the access is denied)

    chmod 777 is set on /dev/sdb1, /var/www/html, and /var/www/html/320, as well as the symlink galleries, all recursive, and everything is chowned apache:apache. PHP just can't read/write to it. I tried with and without the symlink; I've tried everything I can think of. Nothing. Any ideas how I can give Apache/PHP permission to read/write to this drive? With 777 permissions all around, it should already be able to.
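    With permissions that wide open, SELinux is a prime suspect on a RedHat-family system (an assumption, since the distro isn't named in the post): a freshly mounted filesystem won't carry the httpd content label that a confined Apache is allowed to write to. A diagnostic/repair sketch:

    ```sh
    # only relevant if SELinux is enforcing - an assumption, not confirmed by the post
    getenforce                               # "Enforcing" means SELinux is active
    ls -Zd /var/www/html/320                 # inspect the mount's security context

    # label the mounted tree so confined Apache/PHP can read and write it
    chcon -R -t httpd_sys_rw_content_t /var/www/html/320

    # temporary test: if uploads work while permissive, the labels were the issue
    setenforce 0    # remember to re-enable afterwards with: setenforce 1
    ```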

    Read the article

  • Weird CSS-behaviour [migrated]

    - by WMRKameleon
    ```html
    <!DOCTYPE html>
    <html>
    <head>
        <title>PakHet</title>
        <link rel="stylesheet" type="text/css" href="css/basis.css" />
    </head>
    <body>
        <div class="wrapper">
            <div id='cssmenu'>
                <ul>
                    <li class="active"><a href='index.html'><span>Start</span></a></li>
                    <li><a href='pakhet.html'><span>Over PakHet</span></a></li>
                    <li><a href='overons.html'><span>Over Ons</span></a></li>
                    <li class='has-sub '><a href='#'><span>Uw pakket</span></a>
                        <ul>
                            <li><a href='aanmelden.php'><span>Aanmelden</span></a></li>
                            <li><a href='traceren.php'><span>Traceren</span></a></li>
                        </ul>
                    </li>
                </ul>
            </div>
            <div class="header">
                <h1>Hier komt de titel van de website</h1>
            </div>
            <div class="content">
                <p>Dit is de tekst van de content. Dit is de indexpagina.</p>
            </div>
        </div>
    </body>
    </html>
    ```

    And this is the CSS:

    ```css
    /* CSS RESET */
    html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p,
    blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em,
    img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, b, u,
    i, center, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table,
    caption, tbody, tfoot, thead, tr, th, td, article, aside, canvas, details,
    embed, figure, figcaption, footer, header, hgroup, menu, nav, output, ruby,
    section, summary, time, mark, audio, video, * {
        margin: 0;
        padding: 0;
        border: 0;
        vertical-align: baseline;
    }

    table { border-collapse: collapse; border-spacing: 0; }

    /* Einde CSS RESET, nu echte code */
    html, body { background: url(../images/bg_picture.jpg) fixed no-repeat; }

    .wrapper { margin: 0 auto; }
    .header { margin: 0 auto; background-color: rgba(0, 0, 0, 0.4); }
    .content { background-color: rgba(0, 0, 0, 0.4); width: 600px; margin: 0 auto; margin-top: 50px; }
    .content p { color: white; text-shadow: 1px 1px 0 rgba(0,0,0, 0.5); font-family: "Lucida Grande", sans-serif; }

    #cssmenu { height: 37px; display: block; padding: 0; margin: 0; border: 1px solid; }
    #cssmenu > ul { list-style: inside none; padding: 0; margin: 0; }
    #cssmenu > ul > li { list-style: inside none; padding: 0; margin: 0; float: left; display: block; position: relative; }
    #cssmenu > ul > li > a { outline: none; display: block; position: relative; padding: 12px 20px; font: bold 13px/100% "Lucida Grande", Helvetica, sans-serif; text-align: center; text-decoration: none; text-shadow: 1px 1px 0 rgba(0,0,0, 0.4); }
    #cssmenu > ul > li:first-child > a { border-radius: 5px 0 0 5px; }
    #cssmenu > ul > li > a:after { content: ''; position: absolute; border-right: 1px solid; top: -1px; bottom: -1px; right: -2px; z-index: 99; }
    #cssmenu ul li.has-sub:hover > a:after { top: 0; bottom: 0; }
    #cssmenu > ul > li.has-sub > a:before { content: ''; position: absolute; top: 18px; right: 6px; border: 5px solid transparent; border-top: 5px solid #fff; }
    #cssmenu > ul > li.has-sub:hover > a:before { top: 19px; }
    #cssmenu ul li.has-sub:hover > a { background: #3f3f3f; border-color: #3f3f3f; padding-bottom: 13px; padding-top: 13px; top: -1px; z-index: 999; }
    #cssmenu ul li.has-sub:hover > ul, #cssmenu ul li.has-sub:hover > div { display: block; }
    #cssmenu ul li.has-sub > a:hover { background: #3f3f3f; border-color: #3f3f3f; }
    #cssmenu ul li > ul, #cssmenu ul li > div { display: none; width: auto; position: absolute; top: 38px; padding: 10px 0; background: #3f3f3f; border-radius: 0 0 5px 5px; z-index: 999; }
    #cssmenu ul li > ul { width: 200px; }
    #cssmenu ul li > ul li { display: block; list-style: inside none; padding: 0; margin: 0; position: relative; }
    #cssmenu ul li > ul li a { outline: none; display: block; position: relative; margin: 0; padding: 8px 20px; font: 10pt "Lucida Grande", Helvetica, sans-serif; color: #fff; text-decoration: none; text-shadow: 1px 1px 0 rgba(0,0,0, 0.5); }
    #cssmenu, #cssmenu > ul > li > ul > li a:hover { background: #333333; background: -moz-linear-gradient(top, #333333 0%, #222222 100%); background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#333333), color-stop(100%,#222222)); background: -webkit-linear-gradient(top, #333333 0%,#222222 100%); background: -o-linear-gradient(top, #333333 0%,#222222 100%); background: -ms-linear-gradient(top, #333333 0%,#222222 100%); background: linear-gradient(top, #333333 0%,#222222 100%); filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#333333', endColorstr='#222222', GradientType=0 ); }
    #cssmenu { border-color: #000; }
    #cssmenu > ul > li > a { border-right: 1px solid #000; color: #fff; }
    #cssmenu > ul > li > a:after { border-color: #444; }
    #cssmenu > ul > li > a:hover { background: #111; }
    #cssmenu > ul > li.active > a { color: orange; }

    .header { clear: both; }
    ```

    The problem is that whenever I hover over the drop-down menu, a 1px margin appears between the menu and the header. Can I solve that? I can't seem to find the solution.

    Read the article

  • Hex Dump using LINQ (in 7 lines of code)

    - by Fabrice Marguerie
    Eric White has posted an interesting LINQ query on his blog that shows how to create a hex dump in something like 7 lines of code. Of course, this is not production-grade code, but it's another good example that demonstrates the expressiveness of LINQ. Here is the code:

    ```csharp
    byte[] ba = File.ReadAllBytes("test.xml");
    int bytesPerLine = 16;
    string hexDump = ba.Select((c, i) => new { Char = c, Chunk = i / bytesPerLine })
        .GroupBy(c => c.Chunk)
        .Select(g => g.Select(c => String.Format("{0:X2} ", c.Char))
            .Aggregate((s, i) => s + i))
        .Select((s, i) => String.Format("{0:d6}: {1}", i * bytesPerLine, s))
        .Aggregate("", (s, i) => s + i + Environment.NewLine);
    Console.WriteLine(hexDump);
    ```

    Here is a sample output:

    ```
    000000: FF FE 3C 00 3F 00 78 00 6D 00 6C 00 20 00 76 00
    000016: 65 00 72 00 73 00 69 00 6F 00 6E 00 3D 00 22 00
    000032: 31 00 2E 00 30 00 22 00 20 00 65 00 6E 00 63 00
    000048: 6F 00 64 00 69 00 6E 00 67 00 3D 00 22 00 75 00
    000064: 3E 00
    ```

    Eric White reports that he typically notices declarative code being only 20% as long as imperative code.

    Cross-posted from http://linqinaction.net

    Read the article

  • No input file specified on nginx and php-cgi

    - by Sandeep Bansal
    I'm trying to run nginx and php-cgi on my Windows PC. I've got to the point where I want to move the html directory back two directories so I can create some structure. The only problem I have now is that PHP doesn't pick up any .php file. I have tried loading a static HTML file (localhost/test.html) and it works fine, but localhost/info.php doesn't work at all. Can anyone give me some guidance on this? The relevant part of the server block can be found below.

    ```nginx
    server {
        listen 80;
        server_name localhost;
        root ../../www;
        index index.html index.htm index.php;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9123;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    ```

    Thanks
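    "No input file specified" is php-cgi reporting that it could not open the path it was handed in SCRIPT_FILENAME. With root ../../www, $document_root expands to that relative path, which the separately running php-cgi process cannot resolve from its own working directory. A sketch of the usual fix (the absolute path is hypothetical; use wherever www actually lives):

    ```nginx
    server {
        listen 80;
        server_name localhost;
        root C:/www;                    # absolute path instead of ../../www
        index index.html index.htm index.php;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9123;
            fastcgi_index index.php;
            # $document_root now expands to an absolute path php-cgi can open
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    ```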

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes, however, are ignored).

    In its simplest form this DMV can be executed as `SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL)`. The results contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scanning every database; simply specifying DB_ID() in place of the first NULL achieves this.

    In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

    ```sql
    DECLARE @Start DATETIME
    DECLARE @First DATETIME
    DECLARE @Second DATETIME
    DECLARE @Third DATETIME
    DECLARE @Finish DATETIME

    SET @Start = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
    SET @First = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
    SET @Second = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    SET @Third = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
    SET @Finish = GETDATE()

    SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
         , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
         , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
         , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]
    ```

    Running this code will give you 4 result sets:

    - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
    - SAMPLED will have 21 columns full of data.
    - LIMITED will have 12 columns of data and NULLs in the remainder.
    - DETAILED will have 21 columns full of data.

    So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED.

    The final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter we supply has to be a database id and, for the purposes of this blog, we will provide that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName').

    The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

    ```sql
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         ...
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    ```

    which gives us:

    ```sql
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         , [i].[name] AS [IndexName]
         , ...
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
        AND [ddips].[object_id] = [i].[object_id]
    ```

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

    ```sql
    SELECT *
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips
    ```

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

    ```sql
    DECLARE @db_id SMALLINT;
    DECLARE @object_id INT;

    SET @db_id = DB_ID(N'AdventureWorks_2008');
    SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

    IF @db_id IS NULL
    BEGIN
        PRINT N'Invalid database';
    END
    ELSE IF @object_id IS NULL
    BEGIN
        PRINT N'Invalid object';
    END
    ELSE
    BEGIN
        SELECT *
        FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
    END;
    GO
    ```

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will simply be noticed when the results are not consistent with what was expected; in the case of this blog, that is the method I have used.

    So, now that we can relate the values in these columns to something we recognise in the database, let's see what those other values in the DMV are all about. The next columns, partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, we'll skip, as this is a quick look at the DMV and they are pretty self-explanatory.

    The final columns revealed by querying this view in DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3 = 9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and performance starts to fall.

    This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count.

    Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and regular defragmentation work will be needed to keep them in good order. In other cases, indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an "intelligent" process that assesses the key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track. Querying the DMV with the fifth parameter set to 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes.

    There is a correlation between several of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are held in the index, and avg_record_size_in_bytes is the average size of each record. This approximates to:

    ((page_count * 8 * 1024) * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating the index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records fit in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index, purely offered as a guide to help the DBA better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed so that new records don't need to be inserted into the middle of the index but can always be added to the end.

    * it's approximate because there are many factors, such as the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford, a free ebook or paperback from Simple Talk.

    Disclaimer: Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Do parent website application pools serve child application pools as well?

    - by Mike G
    I am running a .NET web application in its own application pool on IIS 7. The parent website is set to run in its own application pool as well. Today we noticed a huge number of connections going to IIS. I tried to browse a plain ol' .html page in the directory of the web application, and it hangs. I then tried to browse another plain .html file in the root of the parent website, and it too hangs. In Performance Monitor, I see there are some 8k connections to the default website and climbing. I can't work out whether my application was the problem or IIS itself. If it was my application, wouldn't the .html page in the root of the parent website still be served?

    Edit: Also, if I shut down the app pool for my application, the .html page in the root of the parent website is still not displayed.

    Read the article
