Search Results

Search found 10698 results on 428 pages for 'inline functions'.


  • Why can't I use SSL certs imported via Server Admin in a custom Apache install?

    - by morgant
    I've got a couple of Mac OS X 10.6.8 Server web servers that run a custom AMP255 (Apache 2.x, MySQL 5.x, and PHP 5.x) stack installed using MacPorts. We've got a lot of Mac OS X Server machines and generally install SSL certs via Server Admin, where they "just work" in the built-in services. These web servers, however, have always had their SSL certs installed in a non-standard location and used only for Apache. Long story short, we're trying to standardize this part of our administration and install certs via Server Admin, but have run into the following issue: when the certs are installed via Server Admin and referenced in our Apache conf files, Apache prompts for a password on startup. It does not seem to be any password we know, certainly not the admin or keychain passwords! We've added the _www user to the certusers group (mainly just to ensure it has the proper access to the private key in /etc/certificates/).

    With the custom-installed certs we have the following files (basically just pasted in from the company we purchase our certs from):

        -rw-r--r--  1 root  admin  1395 Apr 10 11:22 *.domain.tld.ca
        -rw-r--r--  1 root  admin  1656 Apr 10 11:21 *.domain.tld.cert
        -rw-r--r--  1 root  admin  1680 Apr 10 11:22 *.domain.tld.key

    And the following in the VirtualHost in /opt/local/apache2/conf/extra/httpd-ssl.conf:

        SSLCertificateFile    /path/to/certs/*.domain.tld.cert
        SSLCertificateKeyFile /path/to/certs/*.domain.tld.key
        SSLCACertificateFile  /path/to/certs/*.domain.tld.ca

    This setup functions normally. The certs installed via Server Admin, which both Server Admin and Keychain Access show as valid, are installed in /etc/certificates/ as follows:

        -rw-r--r--  1 root  wheel      1655 Apr  9 13:44 *.domain.tld.SOMELONGHASH.cert.pem
        -rw-r--r--  1 root  wheel      4266 Apr  9 13:44 *.domain.tld.SOMELONGHASH.chain.pem
        -rw-r-----  1 root  certusers  3406 Apr  9 13:44 *.domain.tld.SOMELONGHASH.concat.pem
        -rw-r-----  1 root  certusers  1751 Apr  9 13:44 *.domain.tld.SOMELONGHASH.key.pem

    If we replace the aforementioned lines in our httpd-ssl.conf with the following:

        SSLCertificateFile      /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile   /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem

    Apache prompts for the unknown password. I have also tried httpd-ssl.conf configured as follows:

        SSLCertificateFile      /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile   /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.concat.pem

    And as:

        SSLCertificateFile      /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem
        SSLCertificateKeyFile   /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem
        SSLCACertificateFile    /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem

    We've verified (in Keychain Access) that the certificate is configured to allow all applications to access it. A diff of /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem and *.domain.tld.key shows the former is encrypted and the latter is not, so we're assuming Server Admin/Keychain Access is encrypting the key for some reason. I know I can create an unencrypted key file as follows:

        sudo openssl rsa -in /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem -out /etc/certificates/*.domain.tld.SOMELONGHASH.key.no_password.pem

    But I can't do that without entering the password. I thought maybe I could export an unencrypted copy of the key from Keychain Access, but I'm not seeing such an option (not to mention that the .pem options are greyed out in all export options). Any assistance would be greatly appreciated.
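
    One possible workaround, sketched here as a guess rather than taken from the question: export the identity from the System keychain as PKCS#12 with a known temporary passphrase, then strip the passphrase with openssl. The keychain path, file names, and passphrase below are all assumptions.

        # export the identity with a temporary passphrase (paths/names assumed)
        sudo security export -k /Library/Keychains/System.keychain -t identities \
            -f pkcs12 -P tempPass -o /tmp/identity.p12
        # drop the certs and the passphrase, leaving an unencrypted key for Apache
        sudo openssl pkcs12 -in /tmp/identity.p12 -passin pass:tempPass \
            -nocerts -nodes -out "/etc/certificates/*.domain.tld.nopass.key.pem"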

    Read the article

  • Optimal setup for ASUS P6X58D Premium BIOS (no OC)?

    - by rumtscho
    Normally, I'd trust the mainboard manufacturer to choose the best options as defaults. But I've had trouble with this board: even with Quick Boot enabled, it booted twice as slowly as a Pentium 4 Celeron. Then I changed lots of options at once (most of them aren't explained in the manual, just mentioned with a single sentence) and the boot time is now only marginally worse than the Pentium 4 (54 sec against 46 sec from button to password-entry screen). Now I don't know whether I've turned something off that should have stayed on. I suspect I won't even be able to boot from a CD now, because even though it is present in the boot sequence, I removed a timeout I think it needs to check whether there is a disc in the drive.

    The second reason I ask is that I don't have an internal HDD, only an SSD. I forget my sources, but I am under the impression that today's BIOS and OS options are geared toward booting from an HDD, which is often less than optimal when one boots from an SSD, especially where functions cause avoidable write cycles, as an SSD wears out after too many of them. Most of what I've read concerns the OS, but there are some BIOS-relevant options too. I am especially confused about the disk mode. The board supports AHCI, IDE simulation, and RAID, but among the articles I've read there is a proponent for each and no clear argument for any.

    So, can someone tell me which options are important in general, and which are important for an SSD-only system? I don't want to overclock the CPU, so you don't have to say anything about that (yes, I know the board is meant for OC :)). I am thinking of overclocking the RAM, since they sold me 1600-rated heatsinked modules which are running at 1066 now, but I'm not sure yet about that.

    The rest of the system: i7-930, Intel X25-M G2, 6 GB RAM, GTS 250, a no-name Blu-ray ROM, 2 external HDDs over USB 2.0, lots of other USB-connected hardware (12 devices, I think), no SATA 3 drives (will disabling the controller have an impact on performance?), no wired LAN, only WiFi. Lucid Lynx 64-bit, no dual boot, no virtual installations. The main uses of the system are: managing and playing/showing all the media stored on the external disks, lots of image manipulation, some video editing, a bit of (non-demanding) gaming, and rarely development. Lots of Internet surfing too, but that shouldn't have much impact on performance.
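
    As a side note, here is a hedged way to check, from the installed Lucid Lynx system, whether the controller actually came up in AHCI mode and whether the SSD advertises TRIM (assuming the X25-M is /dev/sda):

        dmesg | grep -i ahci                      # AHCI driver messages appear when AHCI mode is active
        sudo hdparm -I /dev/sda | grep -i trim    # "Data Set Management TRIM supported" on capable SSDs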

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it: earlier discussion re: Splunk alternatives.

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost of a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company... we know how to write things). We built a great system on MongoDB and .NET that gave us the functions we needed in about one engineering week. We have now completed our implementation. We use two MongoDB servers (master and slave) and are able to log and index any amount of log data (5 GB/day, 15 GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution in the $1,000-3,000 flat-rate range. The licensing models used by the commercial firms are "milk the data center ops guys" models. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that is really usable. Thank you all for your input (even if it was self-promotion).

    Read the article

  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded, e.g. c:\temp, d:\data, etc., in the batch file itself. Initially I moved these to a text file I could read from a batch file, e.g.:

        for /f "tokens=1,2 delims==" %%R in (config.txt) do (
            if %%R==bdata set bdata=%%S
            if %%R==cdata set cdata=%%S
        )

    The config.txt file contains these values:

        bdata=c:\temp
        cdata=d:\data

    I realized that each time I needed to create a new variable, I would have to update both the config.txt file and the config.bat file, so I decided to move all the values into just the config.bat file, as follows:

        set bdata=c:\temp
        set cdata=d:\data

    I then updated each of the existing batch files to use the variables rather than the hard-coded paths, and added the following lines of code to each batch file except config.bat (the only additional line added to config.bat is @echo off):

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat

    I then have another batch file, start.bat, that centralizes calling all the batch files in sequence. The reason I am using start /wait is that there have been instances where delete.bat ran before compress.bat had finished:

        start /wait compress.bat
        start /wait validate.bat
        start /wait delete.bat

    Questions:

    1. Is this the best way to centralize values and, if not, what is a better way?
    2. Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files?
    3. Do all the batch files have to have @echo off, or is it sufficient for just config.bat?
    4. Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using that command? (See the sketch below.)
    5. All the batch files have different functions, e.g. move, delete, etc., but all use %%a or %%b. Is this okay? For example, validate.bat has:

        for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

    and delete.bat has:

        for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"
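
    On question 4, here is a hedged sketch (not from the original post) of passing a value between batch files as a plain argument; %~1 reads the first argument with surrounding quotes stripped:

        :: start.bat -- hypothetical: hand the data path to the child script
        start /wait compress.bat "%bdata%"

        :: compress.bat -- pick the value up as the first argument
        set "bdata=%~1"
        echo Compressing files in %bdata%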

    Read the article

  • Explanation of the init.d/scripts Fedora

    - by Shahmir Javaid
    Below is a copy of the vsftpd init script; I need some explanations of some of the constructs used in it. Running Fedora 12; script copied from /etc/init.d/vsftpd.

        #!/bin/bash
        #
        ### BEGIN INIT INFO
        # Provides: vsftpd
        # Required-Start: $local_fs $network $named $remote_fs $syslog
        # Required-Stop: $local_fs $network $named $remote_fs $syslog
        # Short-Description: Very Secure Ftp Daemon
        # Description: vsftpd is a Very Secure FTP daemon. It was written completely from
        #              scratch
        ### END INIT INFO

        # vsftpd      This shell script takes care of starting and stopping
        #             standalone vsftpd.
        #
        # chkconfig: - 60 50
        # description: Vsftpd is a ftp daemon, which is the program \
        #              that answers incoming ftp service requests.
        # processname: vsftpd
        # config: /etc/vsftpd/vsftpd.conf

        # Source function library.
        . /etc/rc.d/init.d/functions

        # Source networking configuration.
        . /etc/sysconfig/network

        RETVAL=0
        prog="vsftpd"

        start() {
            # Start daemons.
            # Check that networking is up.
            [ ${NETWORKING} = "no" ] && exit 1
            [ -x /usr/sbin/vsftpd ] || exit 1
            if [ -d /etc/vsftpd ] ; then
                CONFS=`ls /etc/vsftpd/*.conf 2>/dev/null`
                [ -z "$CONFS" ] && exit 6
                for i in $CONFS; do
                    site=`basename $i .conf`
                    echo -n $"Starting $prog for $site: "
                    daemon /usr/sbin/vsftpd $i
                    RETVAL=$?
                    echo
                    if [ $RETVAL -eq 0 ]; then
                        touch /var/lock/subsys/$prog
                        break
                    else
                        if [ -f /var/lock/subsys/$prog ]; then
                            RETVAL=0
                            break
                        fi
                    fi
                done
            else
                RETVAL=1
            fi
            return $RETVAL
        }

        stop() {
            # Stop daemons.
            echo -n $"Shutting down $prog: "
            killproc $prog
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
            return $RETVAL
        }

        # See how we were called.
        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart|reload)
                stop
                start
                RETVAL=$?
                ;;
            condrestart|try-restart|force-reload)
                if [ -f /var/lock/subsys/$prog ]; then
                    stop
                    start
                    RETVAL=$?
                fi
                ;;
            status)
                status $prog
                RETVAL=$?
                ;;
            *)
                echo $"Usage: $0 {start|stop|restart|try-restart|force-reload|status}"
                exit 1
        esac
        exit $RETVAL

    Question I: What the hell is the difference between the && and || signs in the commands below? Is each just an easy way to do a simple if check, or is it completely different from if [ ...something... ]; then ...something... fi?

        # Check that networking is up.
        [ ${NETWORKING} = "no" ] && exit 1
        [ -x /usr/sbin/vsftpd ] || exit 1

    Question II: I get what -eq and -gt are (equal to, greater than), but is there a simple reference that explains what -x, -d and -f are?

    Question III: It says the required starts are $local_fs $network $named $remote_fs $syslog, but I can't see anywhere that it checks for those.

    Any help would be appreciated.
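
    A short illustration of questions I and II (my own sketch, not from the script): && and || are short-circuit operators keyed off the exit status of the test, and -x, -d, -f are file-test flags.

        # "[ cond ] && cmd" runs cmd only if cond succeeds: shorthand for  if [ cond ]; then cmd; fi
        # "[ cond ] || cmd" runs cmd only if cond fails:    shorthand for  if [ ! cond ]; then cmd; fi
        [ "${NETWORKING}" = "no" ] && echo "networking is down"
        [ -x /usr/sbin/vsftpd ]        || echo "missing or not executable"    # -x: exists and is executable
        [ -d /etc/vsftpd ]             && echo "is a directory"               # -d: exists and is a directory
        [ -f /etc/vsftpd/vsftpd.conf ] && echo "is a regular file"            # -f: exists and is a regular file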

    Read the article

  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I occasionally get persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings:

    - memory_limit = 128M
    - eaccelerator.shm_size="64", where shm_size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings)
    - eaccelerator.shm_max="0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is '0' which disables the limit"
    - eaccelerator.shm_ttl="0": "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to '0' which means that eAccelerator won't try to remove any old scripts from shared memory."
    - eaccelerator.shm_prune_period="0": "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then 'shm_prune_period' seconds ago. Default value is '0' which means that eAccelerator won't try to remove any old script from shared memory."
    - eaccelerator.keys = "shm_only": "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    On my phpinfo page it says memory_limit 128M, Version 0.9.5.3, and Caching Enabled true. On my eAccelerator control.php page it says 64 MB of total RAM is available, with memory usage at 77.70% (49.73 MB / 64.00 MB); 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) and 22.1 MB is used by the cache keys, which are populated by the WordPress object cache.

    My questions are:

    1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)?
    2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
    3. If I change eaccelerator.shm_max to equal (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)
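
    For reference, a sketch of the kind of php.ini change question 3 is asking about. The values are illustrative guesses, not tested recommendations, and I am assuming shm_max follows the same megabyte convention as shm_size (the eAccelerator wiki should be checked on that):

        ; hypothetical tuning sketch: cap user-cache entries and allow eviction
        eaccelerator.shm_size = "64"
        eaccelerator.shm_max  = "32"     ; assumed MB, like shm_size; verify against the docs
        eaccelerator.shm_ttl  = "3600"   ; let scripts idle for an hour be evicted under memory pressure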

    Read the article

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable. [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh Command definition: define command { command_name custom_check_gitlab command_line /var/lib/nagios/custom_plugins/check_gitlab.sh } Shell script: #! /bin/sh # [...] RAILS_ENV="production" # Script variable names should be lower-case not to conflict with internal /bin/sh variables such as PATH, EDITOR or SHELL. app_root="/home/git/gitlab" app_user="git" unicorn_conf="$app_root/config/unicorn.rb" pid_path="$app_root/tmp/pids" socket_path="$app_root/tmp/sockets" web_server_pid_path="$pid_path/unicorn.pid" sidekiq_pid_path="$pid_path/sidekiq.pid" ### Here ends user configuration ### # Switch to the app_user if it is not he/she who is running the script. if [ "$USER" != "$app_user" ]; then sudo -u "$app_user" -H -i $0 "$@"; exit; fi # Switch to the gitlab path, if it fails exit with an error. if ! cd "$app_root" ; then echo "Failed to cd into $app_root, exiting!"; exit 1 fi ### Init Script functions check_pids(){ if ! mkdir -p "$pid_path"; then echo "Could not create the path $pid_path needed to store the pids." exit 1 fi # If there exists a file which should hold the value of the Unicorn pid: read it. if [ -f "$web_server_pid_path" ]; then wpid=$(cat "$web_server_pid_path") else wpid=0 fi if [ -f "$sidekiq_pid_path" ]; then spid=$(cat "$sidekiq_pid_path") else spid=0 fi } # Checks whether the different parts of the service are already running or not. check_status(){ check_pids # If the web server is running kill -0 $wpid returns true, or rather 0. # Checks of *_status should only check for == 0 or != 0, never anything else. if [ $wpid -ne 0 ]; then kill -0 "$wpid" 2>/dev/null web_status="$?" else web_status="-1" fi if [ $spid -ne 0 ]; then kill -0 "$spid" 2>/dev/null sidekiq_status="$?" else sidekiq_status="-1" fi } check_pids check_status if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then echo "GitLab is not running." exit 2 fi if [ "$web_status" != "0" ]; then printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$sidekiq_status" != "0" ]; then printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then printf "GitLab and all it's components are \033[32mup and running\033[0m.\n" exit 0 fi
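
    The mail snippet ("3 incorrect password attempts ... USER=git ; COMMAND=/bin/bash -c ...") suggests the plugin is being invoked through sudo as the git user without a passwordless rule. A hedged sketch of one possible fix (the sudoers rule is my assumption, not from the post):

        # /etc/sudoers.d/nagios: allow the nagios user to run the plugin as git with no password
        nagios ALL=(git) NOPASSWD: /var/lib/nagios/custom_plugins/check_gitlab.sh

    The command definition would then invoke it as sudo -u git /var/lib/nagios/custom_plugins/check_gitlab.sh, matching the script's own sudo -u "$app_user" re-exec.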

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with nginx, and I set my public_html to /home/user/public_html/website.com/public, but requests always end up in /usr/local/nginx/html/. How can I change this?

    nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;
            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is "Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'", i.e. the server tries to find the file in the nginx folder and not in my public_html.
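
    One hedged observation (mine, not from the post): nginx.conf only includes /usr/local/nginx/sites-enabled/*, while the website.com server block sits in sites-available, so every request falls through to the default server and its root html. Assuming the usual sites-available/sites-enabled convention and binary path, the missing step would look like:

        ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
        /usr/local/nginx/sbin/nginx -t          # test the configuration (binary path assumed)
        /usr/local/nginx/sbin/nginx -s reload   # reload it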

    Read the article

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's "notify me when a contact comes online"), but it pronounces some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

        say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.

    Update - After some research, here's what I've learned:

    - Text-to-speech is split between turning the text into phonemes, and then turning the phonemes into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text into phonemes, and a method for registering a speech dictionary that adds new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources: PrefixDictionary, CartNames, CartLite, SymbolDictionary, and Homophones. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code-signed by Apple - changing them may prevent some programs from working.
    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are:

    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option that takes the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

        #!/bin/sh
        echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

        # Apple is weird
        sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

        # Get too much information about what files are being opened
        sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

        # Just fun
        say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
        echo "scale=1000; 4*a(1)" | bc -l | say

    Read the article

  • How does formatting work with a PowerShell function that returns a set of elements?

    - by Steve B
    If I write this small function:

        function Foo {
            Get-Process | % { $_ }
        }

    and then run Foo, it displays only a small subset of properties:

        PS C:\Users\Administrator> foo

        Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
        -------  ------    -----      ----- -----   ------     -- -----------
             86      10     1680        412    31     0,02   5916 alg
            136      10     2772       2356    78     0,06   3684 atieclxx
            123       7     1780       1040    33     0,03    668 atiesrxx
            ...

    But even though only 8 columns are shown, there are plenty of other properties (as foo | gm shows). What causes this function to show only these 8 properties?

    I'm actually trying to build a similar function that returns complex objects from a 3rd-party .NET library. The library flattens a 2-level hierarchy of objects:

        function Actual {
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ }
        }

    This method dumps the objects in list form (one line per property). How can I control what is displayed, while keeping the actual object available? I have tried this:

        function Actual {
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ } | format-table Property1, Property2
        }

    It shows the expected table in the console:

        Property1 Property2
        --------- ---------
        ValA      ValD
        ValB      ValE
        ValC      ValF

    But I lost my objects. Running Get-Member on the result shows:

        TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData

        Name                                    MemberType Definition
        ----                                    ---------- ----------
        Equals                                  Method     bool Equals(System.Object obj)
        GetHashCode                             Method     int GetHashCode()
        GetType                                 Method     type GetType()
        ToString                                Method     string ToString()
        autosizeInfo                            Property   Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo autosizeInfo {get;set;}
        ClassId2e4f51ef21dd47e99d3c952918aff9cd Property   System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;}
        groupingEntry                           Property   Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;}
        pageFooterEntry                         Property   Microsoft.PowerShell.Commands.Internal.Format.PageFooterEntry pageFooterEntry {get;set;}
        pageHeaderEntry                         Property   Microsoft.PowerShell.Commands.Internal.Format.PageHeaderEntry pageHeaderEntry {get;set;}
        shapeInfo                               Property   Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;}

        TypeName: Microsoft.PowerShell.Commands.Internal.Format.GroupStartData

        Name                                    MemberType Definition
        ----                                    ---------- ----------
        Equals                                  Method     bool Equals(System.Object obj)
        GetHashCode                             Method     int GetHashCode()
        GetType                                 Method     type GetType()
        ToString                                Method     string ToString()
        ClassId2e4f51ef21dd47e99d3c952918aff9cd Property   System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;}
        groupingEntry                           Property   Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;}
        shapeInfo                               Property   Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;}

    instead of the 2nd-level child objects' members. In this case, I can't pipe the result to functions expecting that type of argument. How is PowerShell supposed to handle this scenario?
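
    For what it's worth, the 8 columns come from the default table view PowerShell's format data defines for System.Diagnostics.Process. A hedged sketch of the usual pattern for the second part: project the properties with Select-Object instead of Format-Table, which keeps real objects in the pipeline rather than Format* records (the property names below are the post's placeholders):

        function Actual {
            $someDotnetObject.ACollectionProperty.ASecondLevelCollection |
                Select-Object Property1, Property2   # objects survive; display is still tabular
        }

        Actual | Get-Member       # now shows Property1/Property2, not Format* internals
        Actual | Format-Table     # formatting can still be applied at the end, by the caller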

    Read the article

  • Why did Intel drop the Itanium?

    - by Cole Johnson
    I was reading up on the history of the computer and I came across the IA-64 (Itanium) processors. They sounded really interesting and I was confused as to why Intel would decide to drop them. The ability to choose explicitly which 2 instructions you want to run in a cycle is a great idea, especially when writing your program in assembly - for example, a faster bootloader. The hundreds of registers should be convincing for any assembly programmer: you could essentially store all of a function's variables in the registers if it doesn't call any other ones. The ability to do instructions like this:

        (qp) xor r1 = r2, r3     ; r1 = r2 XOR r3
        (qp) xor r1 = (imm8), r3 ; r1 = (imm8) XOR r3

    versus having to do:

        ; eax = r1
        ; ebx = r2
        ; ecx = r3
        mov eax, ebx ; first put r2 into r1
        xor eax, ecx ; then set r1 equivalent to r2 XOR r3

    or:

        ; SAME
        mov eax, (imm32) ; first put (imm32) into r1
        xor eax, ecx     ; then set r1 equivalent to (imm32) XOR r3

    I heard it was because of the lack of backwards x86 compatibility, but couldn't that be fixed by just adding the Pentium circuitry to it and adding a processor flag that would switch it to Itanium mode (like switching to Protected or Long mode)? All the great things about it would surely have put them a giant leap ahead of AMD. Any ideas?

    Sadly, this means you would need a very advanced compiler to do this - or even one per specific model of the CPU (e.g. a newer version of the Itanium with an extra feature would require a different compiler). When I was working on a WinForms project (targeting only .NET 2.0) in Visual Studio 2010, I had a compile target of IA-64. That means there is a .NET runtime that could be compiled for IA-64, and a .NET runtime means Windows. Plus, Hamilton's answer mentions Windows NT. Having a full-blown OS like Windows NT means there is a compiler capable of generating IA-64 machine code.

    Read the article

  • Setting up Virtual Hosts with Apache on Windows 2008 server for multiple sites. Complicated setup, including subversion

    - by Roeland
    I am setting up Apache on my Windows 2008 server at home. It will serve two functions:

    1. Subversion hosting, to allow me and some others to manage company documents with version control
    2. Local website hosting for web development (it will need to run several websites, since I generally work on more than one site at a time)

    Here's what I have done so far. I set up Subversion and Apache 2.2 using some walkthroughs, and changed the default port to 1337 (I'm a nerd). Using dyndns.com I created a domain that forwards to my home IP, which is dynamic (company.gotdns.org). I then went into my DNS for company.com and added a record to point repo.company.com to company.gotdns.org. At this point, people who need access to my file repository can get to it by going to repo.company.com/repo, which is good so far.

    My question comes at the next step: setting up virtual hosts with Apache. Ideally, I would like my local websites to be viewable by others in the company from their homes. So, if I am working on site1, I would like them to be able to view it by going to site1.roeland.bythepixel.com. At the same time, site10.wouter.bythepixel.com should go to his local setup for site10. What I have done for this:

    - I went into my DNS for company.com and added a record to point roeland.company.com to company.gotdns.org (which translates to my IP).
    - I added code to my httpd-vhosts.conf (listed at bottom).
    - I added code to my hosts file (listed at bottom).

    Hah - so of course this doesn't work as expected. Going to site1.roeland.bythepixel.com doesn't bring up my test1 site. Could anyone point out where I may be going wrong? Thanks!

    hosts:

        127.0.0.1       localhost
        127.0.0.1       sensenich.roeland.bythepixel.com
        ::1             localhost

    httpd-vhosts.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "F:/Current Projects/sensenich.com"
            ServerName sensenich.roeland.bythepixel.com
            ErrorLog "logs/sensenich.roeland.bythepixel.com-error.log"
            CustomLog "logs/sensenich.roeland.bythepixel.com-access.log" common
        </VirtualHost>
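
    A hedged guess at one missing piece (not from the post): with Apache 2.2, name-based virtual hosting needs a NameVirtualHost directive matching the VirtualHost address, plus a vhost (and a public DNS entry) for each hostname you want to answer to. A sketch, with a hypothetical path:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.roeland.bythepixel.com
            DocumentRoot "F:/Current Projects/site1"    # hypothetical path
        </VirtualHost>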

    Read the article

  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server 'disappears' from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: the network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Windows Server 2008 machine at fixed IP 192.168.1.200. The server has 1 NIC, so I used the 'custom' option when configuring RRAS. The RRAS settings should be default, except that I've disabled ports for connection types I'm not using and reduced PPTP ports to 10. I've also created an address pool and disabled DHCP packet forwarding. The server functions only as a file share and now a VPN server. Local LAN computers all have network shares mapped to the server, authenticated against the local users/groups set up on the server.

    The problem: the moment a client connects through VPN, the server 'disappears' from the local network. All mapped drives disconnect and there is no response to a ping 192.168.1.200. Even if the client disconnects, the server does not re-appear at that address until the RRAS service is stopped.

    I've tried:

    - Using an address pool inside and outside the local subnet
    - Using DHCP relay
    - Checking inbound/outbound filters (none enabled)

    The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me it's something more fundamental I'm missing. My gut says it's something to do with the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of being handled at the server once RRAS becomes 'active' when a client connects. Hopefully this is obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped. I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks

    Read the article

  • Why does this service refuse to start on Windows server 2003?

    - by PenguinCoder
    We have a Windows 2003 server with Cebos MQ1 (ver. 7 and ver. GRI) products installed that have been operational for years. After installing the Microsoft 2010 C++ Redistributable package needed for other development, the MQ1 GRI service now fails to start. Event logs showed that two additional updates (.NET 4 and the 2010 C++ Redistributable SP2) were installed by the redistributable as well. As soon as we discovered the MQ1 service was not starting properly, we removed these three installed packages. However, the service still does not start; the dialog that pops up states 'The service started then stopped.'

    Event logs when we attempt to start the service show nothing: no errors, crashes, failures, or other information related to this service. Executing MQ1Serv.exe directly reports 'Missing command line operation, must specify install, uninstall and company abbreviation.' sc query MQ1Service(GRI) shows a clean exit, with a Win32ExitCode of 0x0. Attempting to reinstall the client or server software gives an error of 'The procedure entry point ReInitializeCriticalSection could not be located in the dynamic link library KERNEL32.dll.' at the 'Registering Libraries' stage.

    At this point, further research suggested that the required function is in URL.dll and that we should verify the library is not corrupted. Running sfc /scannow on the server replaced a few DLLs, including URL.DLL, with versions from 2005. This actually broke other applications, which required a reinstall (one of them being IE 7). After the reinstall and updates, url.dll is version 7.0.5730.13 (2009) and kernel32.dll is version 5.2.3790.4480 (2009). The MQ1 GRI service still will not start, giving the same 'Service started then stopped' error. Running a disassembler on kernel32.dll and url.dll shows no functions named ReInitializeCriticalSection. Attempting the reinstall of the MQ1 client and server, and starting the service again, fails once more. However, setting the compatibility mode on the MQ1 client install exe to 'Windows 95' actually gets the program to install. Setting the compatibility mode on the MQ1 server service does not enable it to start.

    I have been researching this problem for nearly a week and, besides the advice to scan and replace url.dll, have come to no successful conclusions. This service was operational prior to the 2010 C++ install, without any additional parameters or settings. Removing the C++ install and all service packs/updates it silently installed still does not correct the issue of the MQ1 GRI service not starting.

    Q: Has anyone else run into this or a similar issue while attempting to get a service initialized? What have I overlooked, or what else can I try, in order to get this service started?

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives, lost several filegroups, and the database became inaccessible. We have been doing filegroup backups on a daily basis, but due to other issues we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.

    I have tried everything I could think of (and just about every trick and hack I could google), but I still end up at the same point: messages saying that the files for filegroup x do not match the primary filegroup. I am now at the point of trying to edit the system tables (we have a separate temporary environment for this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a SELECT * FROM OPENROWSET(TABLE DBPROP, 5) shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.

    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf/ndf files..." or "see msdn article...", etc. This is an advanced emergency case and I need a real hack so we can just get at the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at system tables, or does it actually open the file?) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.

    Read the article

  • bluetooth connection using pybluez

    - by srj0408
    I am working with Bluetooth - not exactly on Bluetooth stack development, but to use Bluetooth in one of my projects. I had done all of this before using BlueZ tools like hciconfig and hcitool scan, then simple-agent, and the serial module inside Python. But that was quite ad hoc: we were able to connect only one specific device based on its Bluetooth address, and there was no facility for reconnection once the devices were disconnected. Now I want to do this in a sequential manner (all on a Raspberry Pi, and for the present on Ubuntu 12.04), like this:

    i) Store some names in a file, along with some other information with respect to each device.

    ii) Run a script to find devices in the locality with those names, and report any that are found. For this step I took a reference from the Bluetooth book made available by MIT. Below is the script; it only searches for a single name:

        from bluetooth import *

        target_name = "XT1033"
        target_address = None

        nearby_devices = discover_devices()
        for address in nearby_devices:
            if target_name == lookup_name( address ):
                target_address = address
                break

        if target_address is not None:
            print "found target bluetooth device with address ", target_address
            connect_socket(target_address);
        else:
            print "could not find target bluetooth device nearby"

    iii) Connect the device using a client socket. But I don't have any device on which I can write a simple Python script; my client can be any device that will be publishing data. Now I came across a script in the same book that actually accepts a connection from a client requesting permission to connect to the server:

        from bluetooth import *

        port = 1
        server_sock = BluetoothSocket( RFCOMM )
        server_sock.bind(("", port))
        server_sock.listen(1)

        client_sock, client_info = server_sock.accept()
        print "Accepted connection from ", client_info

        data = client_sock.recv(1024)
        print "received [%s]" % data

        client_sock.close()
        server_sock.close()

    Here client_sock, client_info = server_sock.accept() provides the client address and the port requested to be connected. Can I pass the address obtained from the earlier script to this, so that it connects the server to the client?

    iv) Then, if the client gets disconnected, re-connect (simple polling can be used).

    All this can be done using bash and PyBluez functions, but I want to do it in a sequential manner. I am not a master in Python, but I can do some small stuff. Can anyone guide me on this, or direct me to a more useful resource through which I can continue my coding after finding the "X"- and "Y"-named devices?
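
    A hedged sketch of step iii as an outgoing client connection, which is my guess at what's being asked, using the PyBluez client API. The RFCOMM channel number 1 is an assumption; a real device may expose a different channel:

        from bluetooth import BluetoothSocket, RFCOMM

        # target_address comes from the discovery script in step ii
        sock = BluetoothSocket(RFCOMM)
        sock.connect((target_address, 1))   # (address, RFCOMM channel)
        data = sock.recv(1024)              # read whatever the device publishes
        sock.close()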

    Read the article

  • Getting this CSS to work in IE6

    - by jerrygarciuh
    Hi folks, Working on this page: http://www.karlsenner.dreamhosters.com/about.php and having trouble with the navigation in IE6. It validates as XHTML 1.0 Transitional. Works great in FF, IE 8, Chrome, and Windows Safari. In IE6 and Opera 10 the drop menus appear too high. I tried adding in the different versions of http://code.google.com/p/ie7-js/ but it did not solve the issue in IE. The CSS looks like this: #wrapper { position: relative; display: block; background-color: inherit; margin: 0px auto; padding: 0; width: 900px; min-height: 900px; } #nav {} .navImage { position:relative; display:inline; height:102px; /* added in hopes of helping IE position but no dice */ } .subMenu { position:absolute; z-index:10; background-color:#FFF; top: 14px; left:0; } .subMenu a:link, .subMenu a:visited, .subMenu a:active{ display:block; width:90%; padding:6px; margin:0; color:#3CF; font-family:Tahoma, Geneva, sans-serif; font-size:14px; text-decoration:none; font-weight:bold; } .subMenu a:hover{ display:block; width:90%; padding:6px; margin:0; color:#3CF; background-color:#CCC; font-family:Tahoma, Geneva, sans-serif; font-size:14px; text-decoration:none; font-weight:bold; } jQuery rollovers: $('#navcompany').hover(function () { $('#companyMenu').css('display', 'block'); $('#companyImg').attr('src','g/nav/company_over.gif'); }, function () { $('#companyMenu').css('display', 'none'); $('#companyImg').attr('src','g/nav/company.gif'); }); And one of the cells. Since the menu is coming out of PHP and IE was not respecting the widths I just use PHP to get the nav image widths and write them to styles on the fly. Solved the width issue as IE acted like they should inherit their width from the wrapper. This may be a clue as to why they don't appear below their nav images but I can't sort it. <div id="navcompany" class="navImage" style="width:128px"> <a href="about.php"> <img src="g/nav/company_over.gif" name="companyImg" width="128" height="102" border="0" id="companyImg" alt="company" /> </a> <div id="companyMenu" class="subMenu" style="display:none; width:128px"> <a href="about.php">About us</a> <a href="location.php">Our location</a> </div> </div> Any advice greatly appreciated! JG
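
    One hedged guess at the IE6/Opera offset (my speculation, not from the thread): IE6 is notoriously flaky about absolutely positioned children inside a position:relative parent that is display:inline and has no "layout". The classic probe is to force hasLayout on the parent:

        /* hypothetical IE6-only probe: trigger hasLayout on the positioned parent */
        .navImage { zoom: 1; }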

    Read the article

  • Understanding Node.js and concept of non-blocking I/O

    - by Saif Bechan
    Recently I became interested in using Node.js to tackle some parts of my web application. I love that it's all JavaScript and very lightweight, so there is no longer any need for a JavaScript-to-PHP call, just a lighter JavaScript-to-JavaScript call. However, I do not understand all the concepts explained.

    Basic concepts: in the presentation on Node.js, Ryan Dahl talks about non-blocking I/O and why this is the way we need to create our programs. I can understand the theoretical concept: you just don't wait for a response, you go ahead and do other things. You register a callback for the response, and when the response arrives millions of clock cycles later, you can fire it. If you have not already, I recommend watching this presentation. It is very easy to follow and pretty detailed. There are some nice concepts explained on how to write your code in a good manner, and some examples given; I am going to work with the basic example.

    Examples. The way we do things now:

        puts("Enter your name: ");
        var name = gets();
        puts("Name: " + name);

    The problem with this is that the code halts at line 1. It blocks your code. The way we need to do things according to Node:

        puts("Enter your name: ");
        gets(function (name) {
            puts("Name: " + name);
        });

    With this, your program does not halt, because the input handling is a function passed into the call. So the program continues to work without halting.

    Questions: the basic question I have is how this works in real-life situations. I am talking here about use in web applications. The application I am writing does I/O, but it still does it in a blocking manner. I think that most of the time, if not all, you need to block, because you have to wait for the response you are going to work with. When you need to get some information from the database, most of the time that data needs to be verified before you can go further in the code.

    Example 1: take a login. You have to wait for the database response to return, because you cannot do anything else. I can't see a way around this without blocking.

    Example 2: going back to the basic example, the user just requests something from a database which does not need any verification. You still have to block, because you don't have anything more to do. I cannot come up with a single example where you would want to do other things while you wait for the response to return.

    Possible answers: I have read that this frees up resources. When you program like this, it takes less CPU or memory usage. So this non-blocking I/O is ONLY meant to free up resources and does not have any other practical use. Not that this isn't a huge plus - freeing up resources is always good - yet I fail to see it as a good solution, because in both of the above examples the program has to wait for the response of the user. Whether this is inside a function or just inline, in my opinion there is a program that waits for input.

    Resources I looked at: I have looked at some resources before posting this question. They talk a lot about the theoretical concept, which is quite clear, yet I fail to see real-life examples where it makes a huge difference.

    Stack Overflow:
    - What is in simple words blocking IO and non-blocking IO?
    - Blocking IO vs non-blocking IO; looking for good articles
    - tidy code for asynchronous IO

    Other resources:
    - Wikipedia: Asynchronous I/O
    - Introduction to non-blocking I/O
    - The C10K problem
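
    A hedged sketch (mine, not from the question) of why the login in Example 1 still "waits" without blocking: the individual request waits for the database, but the single Node process stays free to serve every other connection in the meantime. db.findUser and respond are made-up stand-ins, not a real API:

        // hypothetical async login handler
        function handleLogin(user, pass, respond) {
            db.findUser(user, function (err, row) {     // non-blocking query; made-up API
                respond(!err && row && row.pass === pass ? "ok" : "denied");
            });
            // control returns here immediately: the process can now accept
            // and handle other requests while the query is still running
        }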

    Read the article

  • Validation error "Value Error : background-position Too many values or values are not recognized" - how to solve?

    - by metal-gear-solid
    Why is validation giving this error, and how do I solve it?

        ul#navigation li#navigation-3 a.current
        Value Error : background-position Too many values or values are not recognized : -164px -164px -36px

    This is the error screen. The CSS:

        ul#navigation { height: 36px; left: 300px; list-style-image: none; list-style-position: outside; list-style-type: none; position: relative; top: 74px; width: 603px; }
        ul#navigation li { display: inline; }
        ul#navigation li a { height: 36px; float: left; text-decoration: none; }
        ul#navigation li a:link, ul#navigation li a:visited { font-family: Arial; color: #595959; font-size: 1.1em; font-weight: bold; }
        ul#navigation li a:hover, ul#navigation li a:active { color: #404040; }
        ul#navigation li a span { display: block; float: left; padding-left: 8px; padding-top: 14px; }

        ul#navigation li#navigation-1 a { width: 53px; background: url(../images/menu-sprite.jpg) no-repeat 0px 0; }
        ul#navigation li#navigation-1 a:active, ul#navigation li#navigation-1 a:hover { background-position: 0px -36px; }
        ul#navigation li#navigation-1 a.current { background-position: 0px 0px -36px; }

        ul#navigation li#navigation-2 a { width: 111px; background: url(../images/menu-sprite.jpg) no-repeat -53px 0; }
        ul#navigation li#navigation-2 a:active, ul#navigation li#navigation-2 a:hover { background-position: -53px -36px; }
        ul#navigation li#navigation-2 a.current { background-position: -53px -53px -36px; }

        ul#navigation li#navigation-3 a { width: 78px; background: url(../images/menu-sprite.jpg) no-repeat -164px 0; }
        ul#navigation li#navigation-3 a:active, ul#navigation li#navigation-3 a:hover { background-position: -164px -36px; }
        ul#navigation li#navigation-3 a.current { background-position: -164px -164px -36px; }

        ul#navigation li#navigation-4 a { width: 100px; background: url(../images/menu-sprite.jpg) no-repeat -242px 0; }
        ul#navigation li#navigation-4 a:active, ul#navigation li#navigation-4 a:hover { background-position: -242px -36px; }
        ul#navigation li#navigation-4 a.current { background-position: -242px -242px -36px; }

        ul#navigation li#navigation-5 a { width: 88px; background: url(../images/menu-sprite.jpg) no-repeat -342px 0; }
        ul#navigation li#navigation-5 a:active, ul#navigation li#navigation-5 a:hover { background-position: -342px -36px; }
        ul#navigation li#navigation-5 a.current { background-position: -342px -342px -36px; }

        ul#navigation li#navigation-6 a { width: 96px; background: url(../images/menu-sprite.jpg) no-repeat -430px 0; }
        ul#navigation li#navigation-6 a:active, ul#navigation li#navigation-6 a:hover { background-position: -430px -36px; }
        ul#navigation li#navigation-6 a.current { background-position: -430px -430px -36px; }

        ul#navigation li#navigation-7 a { width: 77px; background: url(../images/menu-sprite.jpg) no-repeat -526px 0; }
        ul#navigation li#navigation-7 a:active, ul#navigation li#navigation-7 a:hover { background-position: -526px -36px; }
        ul#navigation li#navigation-7 a.current { background-position: -526px -526px -36px; }
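
    The validator is right: background-position accepts at most two values, a horizontal offset followed by a vertical one, so declarations like -164px -164px -36px are invalid CSS. A hedged guess at the intended form (assuming .current should show the same sprite offset as :hover):

        /* invalid: three values */
        ul#navigation li#navigation-3 a.current { background-position: -164px -164px -36px; }
        /* valid: horizontal offset, then vertical offset */
        ul#navigation li#navigation-3 a.current { background-position: -164px -36px; }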

    Read the article

  • ASP.NET Membership API not working on Win2008 server/IIS7

    - by Program.X
    I have a very odd problem. I have a web app that uses the .NET Membership API to provide login functionality. This works fine on my local dev machine, using WebDev 4.0 server. I'm using .NET 4.0 with some URL Rewriting, but not on the pages where login is required. I have a Windows Server 2008 with IIS7 However, the Membership API seemingly does not work on the server. I have set up remote debugging and the LoginUser.LoggedIn event of the LoginUser control gets fired okay, but the MembershipUser is null. I get no answer about the username/password being invalid so it seems to be recognising it. If I enter an invalid username/password, I get an invalid username/password response. Some code, if it helps: <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server" CssClass="validation-error-list" ValidationGroup="LoginUserValidationGroup"/> <div class="accountInfo"> <fieldset class="login"> <legend>Account Information</legend> <p> <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">Username:</asp:Label> <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox> <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName" CssClass="validation-error" Display="Dynamic" ErrorMessage="User Name is required." ToolTip="User Name is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p> <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label> <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox> <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password" CssClass="validation-error" Display="Dynamic" ErrorMessage="Password is required." ToolTip="Password is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p> <asp:CheckBox ID="RememberMe" runat="server"/> <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe" CssClass="inline">Keep me logged in</asp:Label> </p> </fieldset> <p class="login-action"> <asp:Button ID="LoginButton" runat="server" CommandName="Login" CssClass="submitButton" Text="Log In" ValidationGroup="LoginUserValidationGroup"/> </p> and the code behind: protected void Page_Load(object sender, EventArgs e) { LoginUser.LoginError += new EventHandler(LoginUser_LoginError); LoginUser.LoggedIn += new EventHandler(LoginUser_LoggedIn); } void LoginUser_LoggedIn(object sender, EventArgs e) { // this code gets run so it appears logins work Roles.DeleteCookie(); // this behaviour has been removed for testing - no difference } void LoginUser_LoginError(object sender, EventArgs e) { HtmlGenericControl htmlGenericControl = LoginUser.FindControl("errorMessageSpan") as HtmlGenericControl; if (htmlGenericControl != null) htmlGenericControl.Visible = true; } I have "Fiddled" with the Login form reponse and I get the following Cookie-Set headers: Set-Cookie: ASP.NET_SessionId=lpyyiyjw45jjtuav1gdu4jmg; path=/; HttpOnly Set-Cookie: .ASPXAUTH=A7AE08E071DD20872D6BBBAD9167A709DEE55B352283A7F91E1066FFB1529E5C61FCEDC86E558CEA1A837E79640BE88D1F65F14FA8434AA86407DA3AEED575E0649A1AC319752FBCD39B2A4669B0F869; path=/; HttpOnly Set-Cookie: .ASPXROLES=; expires=Mon, 11-Oct-1999 23:00:00 GMT; path=/; HttpOnly I don't know what is useful here because it is obviously encrypted but I find the .APXROLES cookie having no value interesting. It seems to fail to register the cookie, but passes authentication

    Read the article

  • Extending XHTML

    - by Daniel Schaffer
    I'm playing around with writing a jQuery plugin that uses an attribute to define form validation behavior (yes, I'm aware there's already a validation plugin; this is as much a learning exercise as something I'll be using). Ideally, I'd like to have something like this: Example 1 - input: <input id="name" type="text" v:onvalidate="return this.value.length > 0;" /> Example 2 - wrapper: <div v:onvalidate="return $(this).find('[value]').length > 0;"> <input id="field1" type="text" /> <input id="field2" type="text" /> <input id="field3" type="text" /> </div> Example 3 - predefined: <input id="name" type="text" v:validation="not empty" /> The goal here is to allow my jQuery code to figure out which elements need to be validated (this is already done) and still have the markup be valid XHTML, which is what I'm having a problem with. I'm fairly sure this will require a combination of both DTD and XML Schema, but I'm not really quite sure how exactly to execute. Based on this article, I've created the following DTD: <!ENTITY % XHTML1-formvalidation1 PUBLIC "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN" "http://new.dandoes.net/DTD/FormValidation1.dtd" > %XHTML1-formvalidation1; <!ENTITY % Inlspecial.extra "%div.qname; " > <!ENTITY % xhmtl-model.mod SYSTEM "formvalidation-model-1.mod" > <!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" > %xhtml11.dtd; And here is "formvalidation-model-1": <!ATTLIST %div.qname; %onvalidation CDATA #IMPLIED %XHTML1-formvalidation1.xmlns.extra.attrib; > I've never done DTD before, so I'm not even really exactly sure what I'm doing. When I run my page through the W3 XHTML validator, I get 80+ errors because it's getting duplicate definitions of all the XHTML elements. Am I at least on the right track? Any suggestions? EDIT: I removed this section from my custom DTD, because it turned out that it was actually self-referencing, and the code I got the template from was really for combining two DTDs into one, not appending specific items to one: <!ENTITY % XHTML1-formvalidation1 PUBLIC "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN" "http://new.dandoes.net/DTD/FormValidation1.dtd" > %XHTML1-formvalidation1; I also removed this, because it wasn't validating, and didn't seem to be doing anything: <!ENTITY % Inlspecial.extra "%div.qname; " > Additionally, I decided that since I'm only adding a handful of additional items, the separate files model recommended by W3 doesn't really seem that helpful, so I've put everything into the dtd file, the content of which is now this: <!ATTLIST div onvalidate CDATA #IMPLIED> <!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" > %xhtml11.dtd; So now, I'm not getting any DTD-related validation errors, but the onvalidate attribute still is not valid. Update: I've ditched the DTD and added a schema: http://schema.dandoes.net/FormValidation/1.0.xsd Using v:onvalidate appears to validate in Visual Studio, but the W3C service still doesn't like it. Here's a page where I'm using it so you can look at the source: http://new.dandoes.net/auth And here's the link to the w3c validation result: http://validator.w3.org/check?uri=http://new.dandoes.net/auth&charset=(detect+automatically)&doctype=Inline&group=0 Is this about as close as I'll be able to get with this, or am I still doing something wrong?

    Read the article

  • Why are these divs not aligned, and why is there space between them?

    - by acidzombie24
    Why isn't everything aligned? No yellow should be visible, and no orange should be visible except on the right side and at the bottom left, where there's space for another image. Basically my images are pretty much aligned to the center (I have other pics, not in this example, where it's easier to see). However, in this case, when I have a 150px-high image, the 150px-wide cell seems to start lower. Also, why are there spaces in between the images?

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>ldfk;sd</title>
        <style type="text/css">
          div.ImgGallery { max-width: 630px; background: orange; }
          .ImgGallery div { display: inline; }
          /* http://www.brunildo.org/test/img_center.html */
          .ImgGallery div div {
            display: table-cell;
            text-align: center;
            background: gray;
            width: 150px;
            height: 150px;
          }
          .ImgGallery div { background: yellow; vertical-align: middle; }
          /* .ImgGallery div div :nth-child(2n+1) { background: red; } */
          .ImgGallery * { vertical-align: middle; }
          .ImgGallery a { display: block; }
          .ImgGallery a * { border-style: none; }
        </style>
      </head>
      <body>
        <div class="smallGallery">
          <div class="ImgGallery">
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="b.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="b.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
            <div><div><a href="http://google.com"><img src="a.jpg" alt="a.jpg" /></a></div></div>
          </div>
        </div>
      </body>
      </html>
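
    Two things are likely contributing here, and the sketch below is one way to test both at once. The spaces between tiles are the literal whitespace between the display: inline wrapper divs (inline boxes render the markup's spaces and newlines), and mixing inline wrappers with anonymous table-cells leaves each tile subject to line-box alignment, which shifts a cell whose image fills its full height. This sketch trades the table-cell centering trick for inline-block tiles with a line-height fallback, so treat it as a starting point rather than a drop-in fix:

      /* Sketch only: replaces the inline + table-cell combination. */
      .ImgGallery div { display: inline-block; vertical-align: top; }
      .ImgGallery div div {
          display: block;        /* no more table-cell */
          width: 150px;
          height: 150px;
          line-height: 150px;    /* centres the inline img vertically */
          text-align: center;
      }
      .ImgGallery div div img { vertical-align: middle; }

    The residual gaps disappear if the wrapper tags are written with no whitespace between them, or if the tiles are floated instead of left inline-block.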

    Read the article

  • css sticky footer without scrollbar

    - by massinissa
    How do I avoid getting a scrollbar when the footer is stuck to the bottom of the page (not the bottom of the window)? When I remove height: 100% from the content and sidebar, I no longer get the scrollbar; however, the content and sidebar then don't fill all the space down to the footer.

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
        <title>Untitled 13</title>
        <style media="all" type="text/css">
          * { margin: 0; padding: 0; }
          html, body, #wrap, form { height: 100%; }
          #wrap, #footer { width: 750px; margin: 0 auto; }
          #wrap { background: #cff; }
          html, body { color: #000; background: #a7a09a; }
          body > #wrap { height: 100%; min-height: 100%; }
          form { /*height: auto;*/ min-height: 100%; }
          #main {
            background: #000;
            height: 100%;
            min-height: 100%;
            height: auto !important;
          }
          #content {
            height: 100%;
            float: left;
            padding: 10px;
            width: 570px;
            background: #9c9;
          }
          #sidebar {
            height: 100%;
            float: left;
            width: 140px;
            background: #c99;
            padding: 10px;
          }
          #footer {
            position: relative;
            margin-top: -100px;
            height: 100px;
            clear: both;
            background: #cc9;
            bottom: 0;
          }
          .clearfix:after {
            content: ".";
            display: block;
            height: 0;
            clear: both;
            visibility: hidden;
          }
          .clearfix { display: inline-block; }
          * html .clearfix { height: 1%; }
          .clearfix { display: block; }
          #header { /*padding: 5px 10px;*/ background: #ddd; }
        </style>
      </head>
      <body>
        <form id="form1" runat="server">
          <div id="wrap">
            <div id="main" class="clearfix">
              <div id="header">
                <h1>header</h1>
              </div>
              <div id="sidebar">
                <h2>sidebar</h2>
              </div>
              <div id="content">
                <h2>main content</h2>
              </div>
            </div>
          </div>
          <div id="footer">
            <h2>footer</h2>
          </div>
        </form>
      </body>
      </html>
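
    For comparison, a sketch of the classic negative-margin sticky footer using the ids from the markup above; the essential points are that only the outer wrapper carries min-height: 100% (a hard height: 100% on #content and #sidebar is exactly what pushes the page past the viewport and creates the scrollbar), and the wrapper reserves the footer's height so content never underlaps it:

      html, body { height: 100%; margin: 0; }
      #wrap {
          min-height: 100%;         /* at least the viewport tall... */
          height: auto !important;  /* ...but free to grow with content */
          height: 100%;             /* IE6 treats height as min-height */
      }
      #main { padding-bottom: 100px; } /* reserve the footer's height */
      #footer {
          height: 100px;
          margin-top: -100px;       /* pull the footer up over the padding */
          clear: both;
      }

    Making the two columns visually reach the footer is a separate problem; the usual trick of this era is faux columns (a vertically repeating background image on #main) rather than height: 100% on the floats.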

    Read the article

  • Run javascript after form submission in update panel?

    - by AverageJoe719
    This is driving me crazy! I have read at least 5 questions on here closely related to my problem, and probably 5 or so more pages just from googling. I just don't get it. I am trying to have a jQuery UI dialog come up after a user fills out a form, saying 'registration submitted', and then redirect to another page, but I cannot for the life of me get any javascript to work, not even a single alert. Here is my update panel:

      <asp:ScriptManager ID="ScriptManager1" runat="server">
      </asp:ScriptManager>
      <asp:UpdatePanel ID="upForm" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="False">
        <ContentTemplate>
          'Rest of form'
          <asp:Button ID="btnSubmit" runat="server" Text="Submit" />
          <p>Did register Pass? <%= registrationComplete %></p>
        </ContentTemplate>
      </asp:UpdatePanel>

    The jQuery I want to execute (right now this is sitting in the head of the markup, with autoOpen set to false):

      <script type="text/javascript">
          function pageLoad() {
              $('#registerComplete').dialog({
                  autoOpen: true,
                  width: 270,
                  resizable: false,
                  modal: true,
                  draggable: false,
                  buttons: {
                      "Ok": function () {
                          window.location.href = "someUrl";
                      }
                  }
              });
          }
      </script>

    Finally, my code-behind (with all the things I've tried commented out):

      Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSubmit.Click
          'Dim sbScript As New StringBuilder()
          registrationComplete = True
          registrationUpdatePanel.Update()
          'sbScript.Append("<script language='JavaScript' type='text/javascript'>" + ControlChars.Lf)
          'sbScript.Append("<!--" + ControlChars.Lf)
          'sbScript.Append("window.location.reload()" + ControlChars.Lf)
          'sbScript.Append("// -->" + ControlChars.Lf)
          'sbScript.Append("</")
          'sbScript.Append("script>" + ControlChars.Lf)
          'ScriptManager.RegisterClientScriptBlock(Me.Page, Me.GetType(), "AutoPostBack", sbScript.ToString(), False)
          'ClientScript.RegisterStartupScript("AutoPostBackScript", sbScript.ToString())
          'Response.Write("<script type='text/javascript'>alert('Test')</script>")
          'Response.Write("<script>windows.location.reload()</script>")
      End Sub

    I've tried:

    - passing variables from server to client via inline <%= %> in the javascript block of the head tag;
    - putting that same code in a script tag inside the UpdatePanel;
    - using RegisterClientScriptBlock and RegisterStartupScript;
    - just doing a Response.Write with the script tag written in it;
    - various combinations of putting the entire jQuery .dialog() code in the registered startup script, or just trying to change the autoOpen property, or just calling "open" on it.

    I can't even get a simple alert to work with any of these, so I am doing something wrong, but I just don't know what it is. Here is what I know:

    - The jQuery is binding properly even on async postbacks, because the div container that is the dialog box is always invisible. I saw a similar post on here stating that was causing an issue; that isn't the case here.
    - I'm using pageLoad instead of document.ready, since that is supposed to run on both async and normal postbacks, so that isn't the issue.
    - The update panel is updating correctly, because <p>Did register Pass? <%= registrationComplete %></p> updates to true after I submit the form.

    So how can I make this work? All I want is: click the submit button inside an update panel, run server-side code to validate the form and insert into the db, and, if everything succeeded, have that jQuery (modal) dialog pop up saying hey, it worked.
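
    For what it's worth, a sketch of the registration call that is normally needed for partial postbacks: the static ScriptManager.RegisterStartupScript overload must be handed a control inside the UpdatePanel (not Me.Page), so the partial-rendering framework knows to emit the script in the async response, and the final True argument adds the script tags. Response.Write can never work here, because an async response is not HTML. The sketch uses upForm from the markup above; the code-behind calls the panel registrationUpdatePanel, so substitute whichever id is real:

      Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSubmit.Click
          registrationComplete = True
          ' Register against the UpdatePanel itself so the script is included
          ' in the partial-postback response and run when it completes.
          ScriptManager.RegisterStartupScript(upForm, upForm.GetType(), _
              "showRegisterDialog", "$('#registerComplete').dialog('open');", True)
      End Sub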

    Read the article

  • Why is my PHP query executing twice on page load?

    - by user1826238
    I am newish to PHP and I seem to be having an issue with an insert statement that executes twice when I open this page to view a document; in the database, the second insert is 1 second later. It happens in Google Chrome only, and on this page only. IE has no issue; I don't have Firefox to check.

    view_document.php:

      <?php
      require_once($_SERVER['DOCUMENT_ROOT'] . '/../includes/core.php');
      require_once($_SERVER['DOCUMENT_ROOT'] . '/../includes/connect.php');

      $webusername = $_SESSION['webname'];

      if (isset($_GET['document'])) {
          $ainumber = (int) $_GET['document'];

          if (!ctype_digit($_GET['document']) || !preg_match('~^[0-9]+$~', $_GET['document']) || !is_numeric($_GET['document'])) {
              $_SESSION = array();
              session_destroy();
              header('Location: login.php');
          } else {
              $stmt = $connect->prepare("SELECT s_filename, s_reference FROM dmsmain WHERE s_ainumber = ?") or die(mysqli_error());
              $stmt->bind_param('s', $ainumber);
              $stmt->execute();
              $stmt->bind_result($filename, $reference);
              $stmt->fetch();
              $stmt->close();

              $file = $_SERVER['DOCUMENT_ROOT'] . '/../dms/files/' . $filename . '.pdf';

              if (file_exists($file)) {
                  header('Content-Type: application/pdf');
                  header('Content-Disposition: inline; filename=' . basename($file));
                  header('Content-Transfer-Encoding: binary');
                  header('Content-Length: ' . filesize($file));
                  header('Accept-Ranges: bytes');
                  readfile($file);

                  $stmt = $connect->prepare("INSERT INTO dmslog (s_reference, s_userid, s_lastactivity, s_actiontype) VALUES (?, ?, ?, ?)") or die(mysqli_error());
                  date_default_timezone_set('Africa/Johannesburg');
                  $date = date('Y-m-d H:i:s');
                  $actiontype = 'DL';
                  $stmt->bind_param('ssss', $reference, $webusername, $date, $actiontype);
                  $stmt->execute();
                  $stmt->close();
              } else {
                  $missing = "<b>File not found</b>";
              }
          }
      }
      ?>

    My HTTP access records, I assume:

      [15/Nov/2012:10:14:32 +0200] "POST /dms/search.php HTTP/1.1" 200 5783 "http://www.denso.co.za/dms/search.php" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
      [15/Nov/2012:10:14:33 +0200] "GET /favicon.ico HTTP/1.1" 404 - "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
      [15/Nov/2012:10:14:34 +0200] "GET /dms/view_document.php?document=8 HTTP/1.1" 200 2965 "http://www.denso.co.za/dms/search.php" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
      [15/Nov/2012:10:14:35 +0200] "GET /favicon.ico HTTP/1.1" 404 - "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"

    I have checked my <img src=''> links and I don't see a problem with them. The records indicate there is a favicon.ico request, so I created a blank favicon, placed it in my public_html folder, and linked it in the page like so:

      <link href="../favicon.ico" rel="shortcut icon" type="image/x-icon" />

    Unfortunately that did not work, as the statement still executes twice. I am unsure if it is a favicon issue, as my upload page uses an insert query and it executes once. If someone could please tell me where I am going wrong, or point me in the right direction, I would be very grateful.
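
    One frequently reported cause that fits the Chrome-only symptom: once Chrome's built-in PDF viewer sees Accept-Ranges: bytes, it can request the same URL a second time while rendering (often a moment later, as a byte-range request), and every request runs the INSERT. A hedged sketch of guarding the log against that, reusing the variables from the question and logging before the file is streamed:

      <?php
      // Sketch only: assumes $connect, $file, $reference and $webusername are
      // set up exactly as in the question. Skip logging on follow-up range
      // requests so Chrome's second fetch isn't counted as a download.
      if (!isset($_SERVER['HTTP_RANGE'])) {
          $stmt = $connect->prepare(
              "INSERT INTO dmslog (s_reference, s_userid, s_lastactivity, s_actiontype)
               VALUES (?, ?, ?, ?)");
          date_default_timezone_set('Africa/Johannesburg');
          $date = date('Y-m-d H:i:s');
          $actiontype = 'DL';
          $stmt->bind_param('ssss', $reference, $webusername, $date, $actiontype);
          $stmt->execute();
          $stmt->close();
      }

      header('Content-Type: application/pdf');
      header('Content-Disposition: inline; filename=' . basename($file));
      header('Content-Length: ' . filesize($file));
      header('Accept-Ranges: bytes');
      readfile($file);
      exit; // stop here so no trailing output corrupts the PDF
      ?>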

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >