Search Results

Search found 10866 results on 435 pages for 'chrome extension'.


  • SSRS Subscription Fails

    - by Chad
    Our SSRS server is not executing a subscription correctly. (Only subscription we have, btw.) We created a subscription to export a report as an Excel file to the file system. Tried running the job that gets generated, and this error happens:

        'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK SERVICE'. The step failed.

    It's not the most helpful in tracking down what exactly it was trying to do.

    EDIT: Digging further into the logs I also get these errors:

        w3wp!extensionfactory!f!7/30/2010-14:29:26:: w WARN: The extension Report Server FileShare does not have a LocalizedNameAttribute.
        w3wp!extensionfactory!11!7/30/2010-14:34:48:: w WARN: The extension Report Server Email does not have a LocalizedNameAttribute.
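    This error usually means the login the SQL Agent job tries to impersonate is missing or disabled on the database engine. A hedged T-SQL sketch of the usual remedy, assuming the default ReportServer database name:

        USE [master];
        -- recreate the Windows login SSRS runs under (skip if it already exists)
        CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;

        USE [ReportServer];
        -- map it into the report server database and grant the SSRS execution role
        CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
        EXEC sp_addrolemember 'RSExecRole', 'NT AUTHORITY\NETWORK SERVICE';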

    Read the article

  • How many pHPShield loaders do I need to install

    - by Amit
    An application asks me to install an old phpSHIELD version, but prior to that it asks that I delete all phpSHIELD loaders in the PHP extension_dir directory. I am planning to encode some of my own PHP files with the newer phpSHIELD version (8+), so I also need to upload the newer loaders, but I am not sure if it's OK to have multiple phpSHIELD loaders in the extension dir. Can someone please clarify this confusion? My server runs PHP 5.2.14 with phpSHIELD Loader Version 5.0.1 on an i386 architecture on CentOS. The phpSHIELD demo created a bunch of folders containing the loaders (files that end with a .lin extension). I assume the folder Linux_x86-32 is the correct one for my server architecture; it contains files like ixed.5.0.1.lin. Can I upload these next to the existing one in the extension_dir directory? Thank you
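    A hedged way to see which loaders the runtime has actually picked up, before and after copying the new .lin files in (generic PHP checks, nothing phpSHIELD-specific):

        # list every extension the CLI has loaded
        php -m
        # confirm which extension_dir the runtime actually reads
        php -i | grep extension_dir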

    Read the article

  • Can I alias all directory requests to a single file in nginx?

    - by user749618
    I'm trying to figure out how to take all requests made to a particular directory and return a JSON string, without a redirect, in nginx.

    Example:

        curl -i http://example.com/api/call1/

    Expected result:

        HTTP/1.1 200 OK
        Accept-Ranges: bytes
        Content-Type: application/json
        Date: Fri, 13 Apr 2012 23:48:21 GMT
        Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT
        Server: nginx
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 38
        Connection: keep-alive

        {"logout": true}

    Here's what I have so far in my nginx conf:

        location ~ ^/api/(.*)$ {
            index /api_logout.json;
            alias /path/to/file/api_logout.json;
            types { }
            default_type "application/json; charset=utf-8";
            break;
        }

    However, when I try to make the request, the Content-Type doesn't stick:

        $ curl -i http://example.com/api/call1/
        HTTP/1.1 200 OK
        Accept-Ranges: bytes
        Content-Type: application/octet-stream
        Date: Fri, 13 Apr 2012 23:48:21 GMT
        Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT
        Server: nginx
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 38
        Connection: keep-alive

        {"logout": true}

    Is there a better way to do this? How can I get the application/json type to stick?
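    One hedged alternative, since the payload is a fixed string anyway: skip the alias/index dance and return the body directly, which keeps default_type in effect (assumes nginx 0.8.42 or later, where return accepts a body):

        location ~ ^/api/ {
            default_type application/json;
            # return the literal body; no file lookup, no internal redirect
            return 200 '{"logout": true}';
        }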

    Read the article

  • Prefix files with the current directory name using Powershell

    - by XST
    I have folders with images (*.png and *.jpg):

        C:\Directory\Folder1
            01.png
            02.png
            03.jpg
            04.jpg
            05.png

    And I want to rename all the files like this using PowerShell:

        C:\Directory\Folder1
            Folder1 - 01.png
            Folder1 - 02.png
            Folder1 - 03.jpg
            Folder1 - 04.jpg
            Folder1 - 05.png

    So I came up with this simple line:

        Get-ChildItem | Where-Object { $_.Extension -eq ".jpg" -or $_.Extension -eq ".png" } | Rename-Item -NewName { $_.Directory.Name + " - " + $_.Name }

    If there are 35 or fewer files in the folder, I get the wanted result, but if there are 36 or more files, I end up with this:

        C:\Directory\Folder1
            Folder1Folder1Folder1 - 01.png
            Folder1Folder1Folder1 - 02.png
            Folder1Folder1Folder1 - 03.jpg
            Folder1Folder1Folder1 - 04.jpg
            Folder1Folder1Folder1 - 05.png

    The loop stops when the file's name exceeds 248 characters. Any ideas why it's looping?
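    A hedged explanation: past a certain directory size, Get-ChildItem is still enumerating while Rename-Item is producing new files, so freshly renamed items get re-read and renamed again. Forcing the enumeration to finish before any rename happens (note the parentheses) sidesteps that:

        # snapshot the directory listing first, then rename from the snapshot
        (Get-ChildItem | Where-Object { $_.Extension -eq ".jpg" -or $_.Extension -eq ".png" }) |
            Rename-Item -NewName { $_.Directory.Name + " - " + $_.Name }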

    Read the article

  • Amazon EC2: Not able to open web application even if port it opened

    - by learner
    I have a t1.micro instance whose public DNS looks similar to ec2-184-72-67-202.compute-1.amazonaws.com (some numbers changed). On this machine, I am running a Django app:

        $ sudo python manage.py runserver --settings=vlists.settings.dev
        Validating models...
        0 errors found
        Django version 1.4.1, using settings 'vlists.settings.dev'
        Development server is running at http://127.0.0.1:8000/

    I have opened port 8000 through the AWS console. Now when I hit http://ec2-184-72-67-202.compute-1.amazonaws.com:8000 in Chrome, I get "Oops! Google Chrome could not connect to". What is it that I am doing wrong?
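    A hedged first thing to check: the startup banner says the dev server is listening on 127.0.0.1, which only accepts connections from the instance itself. Binding to all interfaces should make it reachable from outside:

        # bind to all interfaces instead of the 127.0.0.1 default
        sudo python manage.py runserver 0.0.0.0:8000 --settings=vlists.settings.dev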

    Read the article

  • Certain SFTP user cannot connect

    - by trobrock
    I have my Ubuntu server set up so users with the group sftponly can connect with SFTP, but have a shell of /bin/false, and they connect to their home directories. This is working fine with three of the user accounts I have. But I added a new user account today, the same way that I added the others, and it will not successfully connect.

        sftp -vvv user@hostname
        debug1: Next authentication method: password
        user@hostname's password:
        debug3: packet_send2: adding 48 (len 73 padlen 7 extra_pad 64)
        debug2: we sent a password packet, wait for reply
        debug1: Authentication succeeded (password).
        debug2: fd 5 setting O_NONBLOCK
        debug3: fd 6 is O_NONBLOCK
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: channel 0: free: client-session, nchannels 1
        debug3: channel 0: status: The following connections are open: #0 client-session (t3 r-1 i0/0 o0/0 fd 5/6 cfd -1)
        debug3: channel 0: close_fds r 5 w 6 e 7 c -1
        debug1: fd 0 clearing O_NONBLOCK
        debug3: fd 1 is not O_NONBLOCK
        Connection to hostname closed by remote host.
        Transferred: sent 2176, received 1848 bytes, in 0.0 seconds
        Bytes per second: sent 127453.3, received 108241.6
        debug1: Exit status -1
        Connection closed

    For a successful user:

        sftp -vvv good_user@hostname
        debug1: Next authentication method: password
        good_user@hostname's password:
        debug3: packet_send2: adding 48 (len 63 padlen 17 extra_pad 64)
        debug2: we sent a password packet, wait for reply
        debug1: Authentication succeeded (password).
        debug2: fd 5 setting O_NONBLOCK
        debug3: fd 6 is O_NONBLOCK
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug1: Sending subsystem: sftp
        debug2: channel 0: request subsystem confirm 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768
        debug2: channel 0: rcvd adjust 2097152
        debug2: channel_input_status_confirm: type 99 id 0
        debug2: subsystem request accepted on channel 0
        debug2: Remote version: 3
        debug2: Server supports extension "posix-rename@openssh.com" revision 1
        debug2: Server supports extension "statvfs@openssh.com" revision 2
        debug2: Server supports extension "fstatvfs@openssh.com" revision 2
        debug3: Sent message fd 3 T:16 I:1
        debug3: SSH_FXP_REALPATH . -> /
        sftp>

    I cannot figure out why one user works and the other won't. I have restarted the ssh service after adding the user. I have even removed the user and added them again, to be sure I am adding it correctly.
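    The two client logs are identical up to the point where the server should start the sftp subsystem, so the difference is almost certainly server-side. A hedged checklist for diffing the two accounts (names and paths are placeholders):

        id user                     # in the sftponly group, like the working accounts?
        grep user /etc/passwd       # same /bin/false shell, valid home directory?
        ls -ld /home/user           # if sshd_config uses ChrootDirectory, it must be
                                    # root-owned and not group/world-writable
        tail -f /var/log/auth.log   # sshd usually logs the real reason here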

    Read the article

  • How to prevent yourself from commenting on websites?

    - by MHH
    There are a bunch of browser add-ons that either block particular websites (e.g. LeechBlock, Chrome Nanny, or various OS-specific solutions) or block the comment section of a website (e.g. CommentBlocker). However, what if you want to be able to read the comment section of all websites, but never want to be able to add comments yourself on particular sites? Is there anything that will allow this? I'm particularly interested in answers that will work for both Windows and Mac, and will also work for Google Chrome, Firefox, and Safari (note they can be different solutions for each browser/operating system).
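    Short of a ready-made add-on, a user stylesheet that hides the input widgets while leaving posted comments visible might get close. A hedged sketch for Firefox's userContent.css or the Stylish extension; the selectors are hypothetical and would need tuning per site:

        /* hide comment *entry* forms on one domain; existing comments stay visible */
        @-moz-document domain("example.com") {
            #respond,
            form.comment-form,
            textarea[name="comment"] { display: none !important; }
        }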

    Read the article

  • vector quality of svg and pdf

    - by Kasper
    I'm converting PDF files to SVG, as it is easier to use SVG files on web pages. I first thought the quality of SVG must be similar to PDF, as they are both vector graphics. However, now that I've looked at it a little more closely, it seems that PDF is a bit superior: https://dl.dropboxusercontent.com/u/58922976/Photos/1.png

    I wonder if I could change this in some way. Is this because PDF vectors are just better quality? Or is it because Chrome renders SVG in lower quality than Adobe Reader renders PDF? Is this a setting in the SVG file that I could change?

    Here is the PDF file: https://dl.dropboxusercontent.com/u/58922976/syllabusLinAlg2012.59.pdf
    And here is the SVG file: https://dl.dropboxusercontent.com/u/58922976/syllabusLinAlg2012.59.svg

    I've made this SVG file in Illustrator, and only Chrome is able to use the embedded SVG fonts, so Firefox and Internet Explorer won't give the expected result.
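    On that last point: SVG fonts were only ever implemented by WebKit-based browsers. A hedged workaround is to reference the face through CSS @font-face inside the SVG instead (file and family names here are hypothetical), which Firefox 3.6+ and IE9+ also honor:

        <svg xmlns="http://www.w3.org/2000/svg">
          <style type="text/css">
            /* assumes the face has been converted to WOFF alongside the SVG */
            @font-face {
              font-family: "SyllabusFace";
              src: url("syllabusFont.woff") format("woff");
            }
            text { font-family: "SyllabusFace", serif; }
          </style>
          <text x="10" y="20">sample</text>
        </svg>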

    Read the article

  • How to change the fonts used in Foxit Reader?

    - by user982438
    I really want to use Foxit Reader as my default PDF reader for its tab interface and low memory footprint. Having tried multiple times over the years, the only thing stopping me are the fonts. A PDF in Adobe Reader looks regular, but the same PDF rendered in Foxit looks ugly. Would this solution work in Chrome as well? Since, as I recall, Chrome uses Foxit Reader as its embedded PDF engine. And is there any reason why fonts are rendered differently in Adobe and Foxit Reader?

    Read the article

  • My PNG has transparency, but after saving with PHP GD, transparency is lost [closed]

    - by Harry Stroker
    I found the solution to my problem: see the original post below, and my solution at the very bottom. I made a stupid mistake :)

    First I crop an image and then save it to a PNG file. Right after this, I also show the image. However, the saved PNG does not have transparency and the shown one does. What is going on?

        $this->resource = imagecreatefrompng($this->url);
        imagealphablending($this->resource, false);
        imagesavealpha($this->resource, true);

        $newResource = imagecreatetruecolor($destWidth, $destHeight);
        imagealphablending($newResource, false);
        imagesavealpha($newResource, true);
        $resample = imagecopyresampled($newResource, $this->resource, 0, 0, $srcX1, $srcY1, $destWidth, $destHeight, $srcX2 - $srcX1, $srcY2 - $srcY1);
        imagedestroy($this->resource);
        $this->resource = $newResource;

        // SAVING
        imagepng($this->resource, $destination, 9); // note: PNG compression level runs 0-9, not 0-100

        // SHOWING
        header('Content-type: image/png');
        imagepng($this->resource);

    The reason I also save the image is for caching. If the script is executed on a PNG, it saves a cached PNG. Next time the image is requested, the PNG file will be shown, but it has lost its transparency. Even stranger: when I "save as" that cached PNG (within Firefox), it suddenly saves it as a JPG, even though the extension was PNG. Downloading the cached PNG using Chrome and opening it in Photoshop gives the error: "file-format module cannot parse the file". I will show you the shown PNG and the generated PNG: http://www.foodmuseum.nl/SaveProblemTransparency.png Once I try to show that saved PNG with the GD library, it gives me an error.

    EDIT: NO, this is NOT a duplicate - I already used their solution. The solution in the supposed duplicate works for showing my image, but I also try to save it with the exact same resource, and then it has no transparency.

    EDIT 2 - SOLUTION: I found out what the problem was. It was a stupid mistake. The script I provided above was cut out of a class and pasted as sequential code, which is not exactly what happens in real life. The save image function:

        function saveImage($destination, $quality = 90)
        {
            $this->loadResource();
            switch ($extension) {
                default:
                case 'JPG':
                case 'jpg':
                    imagejpeg($this->resource, $destination, $quality);
                    break;
                case 'gif':
                    imagegif($this->resource, $destination);
                    break;
                case 'png':
                    imagepng($this->resource, $destination);
                    break;
                case 'gd2':
                    imagegd2($this->resource, $destination);
                    break;
            }
        }

    However... $extension does not exist. I fixed it by adding:

        $extension = $this->getExtension($destination);
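    For completeness, a hedged sketch of what that missing line could look like without a custom helper - the getExtension() name comes from the post, but PHP's built-in pathinfo() can derive the extension directly:

        // derive the target extension from the destination path
        $extension = strtolower(pathinfo($destination, PATHINFO_EXTENSION));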

    Read the article

  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    Reluctant to ask, because there is so much here about 'postfix relay access denied', but I cannot find my case: I use PHP (Zend Framework) to send emails outside my network using the Google mail server, because I could not send mail outside my server (user: web). However, when I send out an email via cron (user: root, I believe), still using ZF, using the same mail config/credentials, I get the message: '5.7.1 Relay access denied'.

    I guess I need to know one of two things:
    1. How can I use the Google smtp server from cron?
    2. What do I need to change in my config to send mail using my own server instead of Google?

    Though the answer to 2. is the more structural solution, I assume, I am quite happy with an answer to 1. as well, because I think Google is better at server maintenance (security/spam) than I am. Below are my ZF application.ini mail section, main.cf and master.cf.

    application.ini:

        resources.mail.transport.type = smtp
        resources.mail.transport.auth = login
        resources.mail.transport.host = "smtp.gmail.com"
        resources.mail.transport.ssl = tls
        resources.mail.transport.port = 587
        resources.mail.transport.username = [email protected]
        resources.mail.transport.password = xxxxxxx
        resources.mail.defaultFrom.email = [email protected]
        resources.mail.defaultFrom.name = "my company"

    main.cf:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = /usr/share/doc/postfix

        # TLS parameters
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = mail.second-start.nl
        mydomain = second-start.nl
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        html_directory = /usr/share/doc/postfix/html
        message_size_limit = 30720000
        virtual_alias_domains =
        virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
        virtual_mailbox_base = /home/vmail
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        smtpd_sasl_auth_enable = yes
        broken_sasl_auth_clients = yes
        smtpd_sasl_authenticated_header = yes
        # see under Spam
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
        virtual_transport = dovecot
        dovecot_destination_recipient_limit = 1

        # Spam
        disable_vrfy_command = yes
        smtpd_delay_reject = yes
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
        smtpd_error_sleep_time = 1s
        smtpd_soft_error_limit = 10
        smtpd_hard_error_limit = 20

    master.cf:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp      inet  n       -       -       -       -       smtpd
        #smtp     inet  n       -       -       -       1       postscreen
        #smtpd    pass  -       -       -       -       -       smtpd
        #dnsblog  unix  -       -       -       -       0       dnsblog
        #tlsproxy unix  -       -       -       -       0       tlsproxy
        #submission inet n      -       -       -       -       smtpd
        #  -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps    inet  n       -       -       -       -       smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628      inet  n       -       -       -       -       qmqpd
        pickup    fifo  n       -       -       60      1       pickup
        cleanup   unix  n       -       -       -       0       cleanup
        qmgr      fifo  n       -       n       300     1       qmgr
        #qmgr     fifo  n       -       -       300     1       oqmgr
        tlsmgr    unix  -       -       -       1000?   1       tlsmgr
        rewrite   unix  -       -       -       -       -       trivial-rewrite
        bounce    unix  -       -       -       -       0       bounce
        defer     unix  -       -       -       -       0       bounce
        trace     unix  -       -       -       -       0       bounce
        verify    unix  -       -       -       -       1       verify
        flush     unix  n       -       -       1000?   0       flush
        proxymap  unix  -       -       n       -       -       proxymap
        proxywrite unix -       -       n       -       1       proxymap
        smtp      unix  -       -       -       -       -       smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay     unix  -       -       -       -       -       smtp
                -o smtp_fallback_relay=
        #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq     unix  n       -       -       -       -       showq
        error     unix  -       -       -       -       -       error
        retry     unix  -       -       -       -       -       error
        discard   unix  -       -       -       -       -       discard
        local     unix  -       n       n       -       -       local
        virtual   unix  -       n       n       -       -       virtual
        lmtp      unix  -       -       -       -       -       lmtp
        anvil     unix  -       -       -       -       1       anvil
        scache    unix  -       -       -       -       1       scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        #
        # Many of the following services use the Postfix pipe(8) delivery
        # agent.  See the pipe(8) man page for information about ${recipient}
        # and other message envelope options.
        # ====================================================================
        #
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        #
        maildrop  unix  -       n       n       -       -       pipe
          flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # ====================================================================
        #
        # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
        #
        # Specify in cyrus.conf:
        #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
        #
        # Specify in main.cf one or more of the following:
        #  mailbox_transport = lmtp:inet:localhost
        #  virtual_transport = lmtp:inet:localhost
        #
        # ====================================================================
        #
        # Cyrus 2.1.5 (Amos Gouaux)
        # Also specify in main.cf: cyrus_destination_recipient_limit=1
        #
        #cyrus     unix  -       n       n       -       -       pipe
        #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
        #
        # ====================================================================
        # Old example of delivery via Cyrus.
        #
        #old-cyrus unix  -       n       n       -       -       pipe
        #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
        #
        # ====================================================================
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp      unix  -       n       n       -       -       pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail    unix  -       n       n       -       -       pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp     unix  -       n       n       -       -       pipe
          flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n      n       -       2       pipe
          flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman   unix  -       n       n       -       -       pipe
          flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
        dovecot   unix  -       n       n       -       -       pipe
          flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
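    For option 1, a hedged sketch: rather than having cron-run PHP talk to Google directly, let the local postfix relay everything through Gmail, so anything submitted to localhost also goes out via Google. In main.cf (values mirror the application.ini credentials above):

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

    with /etc/postfix/sasl_passwd holding the Gmail username and password for [smtp.gmail.com]:587, then:

        postmap /etc/postfix/sasl_passwd
        postfix reload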

    Read the article

  • Is there a Windows 7 add-on that will put the PID in the title bar of a window?

    - by Chris
    Occasionally I run many instances of something, like Chrome or Visual Studio. Rarely, but often enough to bug me, one of them gets hosed and starts to consume 100% CPU. I can fire up the task manager to see which process is using 100%, but if it just says chrome.exe or devenv.exe, I don't know which window is the culprit. I'd like to know before terminating the process, so I can activate the app and shut it down cleanly. The best I've found so far is to use Process Explorer's feature where I can right click a process and say "bring to front". But I am curious as to whether there is an app that will put the PID(s) right in the title bar of the window so I can tell which window matches the process. I am using Windows 7 64-bit.
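    I don't know of an add-on that rewrites title bars, but as a hedged stopgap, PowerShell can map visible windows to PIDs without Process Explorer:

        # list every process that owns a top-level window, with its PID and title
        Get-Process | Where-Object { $_.MainWindowTitle } |
            Select-Object Id, ProcessName, MainWindowTitle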

    Read the article

  • GIMP Slow Startup

    - by muntoo
    Is there any way to speed up GIMP's startup time on Windows Vista Home Premium 32-bit, 1.6 [dual] Intel processors? On XP [different computer], it loads in less than 3 seconds. On Vista, it takes 20 seconds:

        2 seconds  (other - fonts, brushes, etc.)
        18 seconds (extension-script-fu)

    It just freezes at extension-script-fu. Looking at Process Explorer (or Task Manager, whatever), I see that it's not taking any CPU. EDIT: it does seem to be taking 50% of the CPU. It gets stuck for about 18 seconds, then starts working again, and the actual GIMP program pops up [...finally]. I have the latest stable version running (I think). I tried it with XP SP2 Compatibility mode and/or Run As Administrator, but that didn't help.

    EDIT: One way would be to disable script-fu. Does anyone know how to disable it at startup? (NOTE: Just wanted to point out that the title and the tags are the same. :D )
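    On disabling script-fu: GIMP loads it as an ordinary plug-in binary, so a hedged approach is simply renaming that binary so startup skips it. The install path below is an assumption about a default install; keep the file so it can be renamed back:

        rem script-fu ships as a plug-in executable; renaming it disables it
        ren "C:\Program Files\GIMP-2.0\lib\gimp\2.0\plug-ins\script-fu.exe" script-fu.exe.disabled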

    Read the article

  • generate correctly a self signed certificate Zimbra

    - by rkmax
    I have a single mail server with Zimbra 8.0.0. To generate the certificate I'm following these steps:

        ORG=MyOrganization
        CN=mail.mydomain.com
        COUNTRY=myCountry
        CITY=myCity
        /opt/zimbra/bin/zmcertmgr createcrt -new -days 365 -subject "/C=$COUNTRY/ST=N/A/L=$CITY/O=$ORG/OU=ZCS/CN=$CN"
        /opt/zimbra/bin/zmcertmgr deploycrt self -allserver
        su - zimbra -c "zmcontrol restart"

    Verifying with /opt/zimbra/bin/zmcertmgr viewdeployedcrt, I can see the new cert. In Chrome I go to https://mail.mydomain.com and export the .cer, then test on a Windows client:

        certutil.exe -addstore root \path\to\exported.cert
        root "Root Certification Authorities trusted"
        You can add a root certificate to the root store
        CertUtil: -addstore command error: 0x8007000d (WIN32: 13)
        CertUtil: Invalid data.

    Even from Chrome I've tried to add the cert, without successful results. Can anyone help me with this problem?
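    A hedged guess about the "Invalid data" error: certutil is picky about the export encoding, and re-encoding the exported certificate as binary DER before importing often clears it up (file names are placeholders):

        # on the server, or any box with OpenSSL: re-encode the cert as DER
        openssl x509 -in exported.cer -outform DER -out exported.der

        # then on the Windows client:
        certutil.exe -addstore root exported.der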

    Read the article

  • Firefox: This connection is untrusted + Behind corporate firewall

    - by espais
    I've seen some similar issues strewn throughout Google's results about this, but none seem to be corporate-specific. I continually get the 'This connection is untrusted' screen every time I attempt to log into a secure site...for instance Gmail. This is pretty annoying as sometimes I have to go through the process of adding the exception two or three times before it finally lets me into Gmail. I am behind a corporate firewall, going through an internal proxy server to get to the Internet, so there is no possibility for me to update the firewall...etc. Does anybody know a way around this? Can it simply be disabled (and is that safe)? EDIT I'm going to reopen this question with a bit of new information. I have been using Google Chrome lately until today, and one thing that I noticed was that I never had this issue when using either Chrome or Internet Explorer. Is there something that these other browsers do that I need to manually do in FF?
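    A hedged explanation for the browser difference: Chrome and IE trust the Windows certificate store, where the corporate proxy's root CA was probably pushed by group policy, while Firefox keeps its own store. Importing that CA into Firefox (Tools > Options > Advanced > Encryption > View Certificates > Authorities > Import) should stop the warnings; from the command line, NSS's certutil tool can do the same (nickname, file name, and profile path are placeholders):

        certutil -A -n "Corp Proxy Root CA" -t "C,," -i corp-root.cer -d "%APPDATA%\Mozilla\Firefox\Profiles\xxxxxxxx.default"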

    Read the article

  • File downloaded from IIS6/Win2003 server to a Mac (and not PCs) is incredibly slow

    - by Simon Swords
    We have a test zip file on our customer's server that we host for him. Downloading it to a Mac is incredibly slow. On a Mac, trying the download via Safari 5.0.3 and Chrome 8.0.552.231 results in a quick burst of normal download speed, which then plummets to almost no speed at all after 1 or 2 MB (between 1 and 5 Kb/s - yes, kiloBITS per second! According to the network monitor). Downloading via Windows was fine and speedy. Tested via:

        IE7 7.0.5730.13 and Chrome Portable 8.0.552.224 on Windows XP Pro, and
        IE8 8.0.7600.16385 in a Windows 7 virtual machine running via VirtualBox 4.0.0 r69151 on the same Mac mentioned above

    Google hasn't helped us out on this occasion, possibly because the search terms I'm having to use are quite generic. Has anybody ever experienced this, and if so, how do we fix it? Thanks in advance
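    A hedged way to take the browsers out of the equation and get hard numbers from the Mac side (URL is a placeholder):

        # report the average download speed in bytes/sec, discarding the payload
        curl -o /dev/null -w 'avg speed: %{speed_download} bytes/sec\n' http://server/test.zip

    If curl stalls the same way, the cause is likely below HTTP - TCP window scaling or NIC offload settings on the Windows 2003 box are common suspects - rather than IIS itself.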

    Read the article

  • using xml type attribute for derived complex types

    - by David Michel
    Hi All,

    I'm trying to get derived complex types from a base type in an XSD schema. It works well when I do this (inspired by this):

    xml file:

        <person xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Employee">
            <name>John</name>
            <height>59</height>
            <jobDescription>manager</jobDescription>
        </person>

    xsd file:

        <xs:element name="person" type="Person"/>
        <xs:complexType name="Person" abstract="true">
            <xs:sequence>
                <xs:element name="name" type="xs:string"/>
                <xs:element name="height" type="xs:double"/>
            </xs:sequence>
        </xs:complexType>
        <xs:complexType name="Employee">
            <xs:complexContent>
                <xs:extension base="Person">
                    <xs:sequence>
                        <xs:element name="jobDescription" type="xs:string"/>
                    </xs:sequence>
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>

    However, if I want to have the person element inside, for example, a sequence of another complex type, it doesn't work anymore:

    xml:

        <staffRecord>
            <company>mycompany</company>
            <dpt>sales</dpt>
            <person xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Employee">
                <name>John</name>
                <height>59</height>
                <jobDescription>manager</jobDescription>
            </person>
        </staffRecord>

    xsd file:

        <xs:element name="staffRecord">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="company" type="xs:string"/>
                    <xs:element name="dpt" type="xs:string"/>
                    <xs:element name="person" type="Person"/>
                    <xs:complexType name="Person" abstract="true">
                        <xs:sequence>
                            <xs:element name="name" type="xs:string"/>
                            <xs:element name="height" type="xs:double"/>
                        </xs:sequence>
                    </xs:complexType>
                    <xs:complexType name="Employee">
                        <xs:complexContent>
                            <xs:extension base="Person">
                                <xs:sequence>
                                    <xs:element name="jobDescription" type="xs:string"/>
                                </xs:sequence>
                            </xs:extension>
                        </xs:complexContent>
                    </xs:complexType>
                </xs:sequence>
            </xs:complexType>
        </xs:element>

    When validating the XML against that schema with xmllint (under Linux), I get this error message:

        config.xsd:12: element complexType: Schemas parser error : Element '{http://www.w3.org/2001/XMLSchema}sequence': The content is not valid. Expected is (annotation?, (element | group | choice | sequence | any)*).
        WXS schema config.xsd failed to compile

    Any idea what is wrong?

    David
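    The error message is actually naming the cause: a sequence may only contain element, group, choice, sequence, or any - not complexType definitions. Named complex types have to live at the top level of the schema; a hedged rearrangement:

        <!-- named types are global; staffRecord just references Person -->
        <xs:complexType name="Person" abstract="true">
            <xs:sequence>
                <xs:element name="name" type="xs:string"/>
                <xs:element name="height" type="xs:double"/>
            </xs:sequence>
        </xs:complexType>
        <xs:complexType name="Employee">
            <xs:complexContent>
                <xs:extension base="Person">
                    <xs:sequence>
                        <xs:element name="jobDescription" type="xs:string"/>
                    </xs:sequence>
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>
        <xs:element name="staffRecord">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="company" type="xs:string"/>
                    <xs:element name="dpt" type="xs:string"/>
                    <xs:element name="person" type="Person"/>
                </xs:sequence>
            </xs:complexType>
        </xs:element>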

    Read the article

  • Make windows vista file explorer act normally

    - by user25866
    Is there some file I can remove, or something I can do, to globally ensure that Windows Vista/XP/etc. doesn't do annoying things? Annoying things:

    1) Hide the file extension
    2) All these "meta" columns I couldn't care less about in "details" view (Rating, Album, Date taken, Assistant's name, Artist, 35mm focal length, City, Other City, etc.). All I want are Name, Size, Date created, Date modified, and file extension. MAYBE file chmod settings.
    3) That garbage in the left pane known as "favorite links" (Documents, Desktop, Photos, Music, etc.)
    4) Switching between detail view, large icon view, thumbnail view, list view, and tiles when I go to different folders. All I want is detail view, with the same columns, every time. That's it.

    I shouldn't have to get third-party software to make my file system browsable, but if I need to, so be it... Why are all these settings buried away? It feels like I have to apply them to each folder every time.
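    On point 4: Explorer memorizes a view per folder in the registry "Bags" keys, so one hedged approach - after setting Details view once and using Folder Options > View > Apply to Folders - is to clear that per-folder cache so the global choice sticks. The key paths below are the commonly cited XP and Vista locations; back up the registry before deleting:

        rem XP-era location
        reg delete "HKCU\Software\Microsoft\Windows\Shell\Bags" /f
        reg delete "HKCU\Software\Microsoft\Windows\Shell\BagMRU" /f
        rem Vista location
        reg delete "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags" /f
        reg delete "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU" /f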

    Read the article

  • No option to select ASP.Net Version in IIS 6?

    - by GenericTypeTea
    I'm running Windows Server 2003 64-bit edition. I've just installed the .NET 4 Framework in order to get a new WCF service up and running. However, I have no options anywhere in IIS 6 for selecting the ASP.NET framework version. I.e., right-click > Properties on the website should show an ASP.NET tab from which I should be able to select v2 or v4. Does anyone know why it's not there and how I can make it appear? For the time being I've had to go into Website Properties > Home Directory > Configuration and change the .svc extension to use v4.0.30319 instead. So everything's now working for my WCF service, but every other extension is set to v2. How can I get the tab? It's not visible on any of my 23 websites.
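    Until the tab shows up, the same switch can be made per site from the command line with aspnet_regiis - a hedged sketch, assuming the default framework path and that W3SVC/1 is the right site ID (check the actual ID in IIS Manager):

        cd %WINDIR%\Microsoft.NET\Framework64\v4.0.30319
        rem map every extension for this site (and its children) to .NET 4
        aspnet_regiis -s W3SVC/1/ROOT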

    Read the article

  • How to set the PHP Api Version for phpize

    - by Tom Frost
    I'm upgrading PHP on my server, but I'm running into a problem with phpize and compiling external modules. phpize -v reports:

        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20090115
        Zend Extension Api No:   220090115

    But on my test server (which I'm trying to replicate) I get this:

        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626

    I'm running Debian Squeeze, pulling the PHP 5.3.0-2 packages from the experimental repo. The difference between the two servers is that the first server has had old versions of PHP on it, and the test server was installed with PHP 5.3.0-2 from the start. I've attempted uninstalling all PHP packages from the first server (using --purge to get rid of all the config files) and re-installing 5.3 fresh, but I'm still having the same issue. Help!
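    A hedged line of attack: a stale PHP Api Version usually means the phpize being run (or the php-config it consults) is a leftover from the old install rather than the 5.3 one. Worth checking which binaries actually win on $PATH, and pinning the right php-config when building:

        # find every copy on the path; the first hit is the one in use
        which -a phpize php-config
        phpize --version
        # when building an extension, point configure at the correct php-config
        ./configure --with-php-config=/usr/bin/php-config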

    Read the article

  • How to lock Firefox tab to domain or URL pattern

    - by f3lix
    I know Firefox extensions that allow protecting (cannot be closed) and locking (cannot change URL) tabs. What I need is an extension that locks a tab to a certain domain or URL pattern. For example, I want to lock a tab to the domain example.com. As long as I follow links that are within this domain, the tab should show normal (unlocked) behavior, but if I follow a link to another domain, the link should be opened in a new tab -- leaving the locked tab open with a URL within the locked domain. Even better would be the functionality to lock a tab to a URL pattern: if a URL matches the pattern, it is opened in the current tab; otherwise, it is opened in a new tab. Do you know something (preferably an extension for FF 8.0) that provides this kind of functionality?

    Read the article

  • How do I get a file type to show up with a name I choose in Windows Explorer?

    - by Adrian
    I associated a file extension using the command assoc. But in Explorer, it lists the type as the extension name. I.e., assoc .sh=ShellScript will still cause Explorer to show the type as "SH File". Any way to change it so it shows up as ShellScript, or better yet, Shell Script?

    EDIT: Using assoc didn't work. Seems to be something wrong with my registry. I figured that using quotes would put in a white space, but because it didn't show up in Explorer, I figured it may have been part of the problem.
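    For reference, a hedged sketch of how the pieces fit together: assoc maps the extension to a ProgID, and Explorer's "Type" column comes from that ProgID's default registry value, so the friendly name can be set directly:

        assoc .sh=ShellScript
        rem the ProgID's default value is what Explorer displays as the type
        reg add "HKCR\ShellScript" /ve /d "Shell Script" /f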

    Read the article

  • Mac OS X Lion 10.7.2 update breaks SSL

    - by mcandre
    Summary

    After updating from 10.7.1 to 10.7.2, neither Safari nor Google Chrome can load GMail. Spinning Beachballs all around. The problem isn't GMail; Firefox loads GMail just fine. The problem isn't limited to Safari or Google Chrome; other applications also have trouble with SSL: Gilgamesh and Safari. Any program that uses WebKit (Google Chrome, Safari) or a Cocoa library (Gilgamesh) to access the Internet has trouble loading secure sites.

    The various forums online suggest a handful of fixes, none of which work.

    Analysis

    Fix #1: Open Keychain Access.app and delete the Unknown certificate. The 10.7.2 update also prevents Keychain Access from loading. The Keychain program itself Spinning Beachballs.

    Fix #2: Delete ~/Library/Keychains/login.keychain and /Library/Keychains/System.keychain. This temporarily resolves the issue, and lets you load secure sites, but a minute or two after rebooting or hibernating somehow magically undoes the fix, so you have to delete these files over and over.

    Fix #3: Delete ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob*. There is a rumor that the new MobileMe/iCloud service ubd is causing the issue. This fix does not resolve the issue.

    Fix #4: Open Keychain Access, open the Preferences, and disable OCSP and CRL. This fix does not resolve the issue.

    Fix #5: Use the 10.7.0 - 10.7.2 combo installer, rather than the 10.7.1 - 10.7.2 installer. When I run the combo installer, it stays forever at the "Validating Packages..." screen. The combo installer itself is bugged to He||. I force-quit the installer, ran "sudo killall installd" to force-quit the background installer process, and reran the combo installer. Same problem: it stalls at "Validating Packages..."

    Recap

    The only fix that works is deleting the keychains, but you have to do this every time you reboot or wake from hibernate. There is some evidence that ubd continually corrupts the keychain files, but the suggested ubd fix of deleting ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob* does not resolve this issue. Evidently, something is corrupting the keychain over and over and over.

    Also posted on the Apple Support Communities.

    Read the article

  • PHP DL Function

    - by Pete Herbert Penito
    Is allowing dynamic extension loading dangerous for some reason? I ask because I need it to load the PECL oauth.so extension, via dl(), to make the Google AdWords PHP SDK work: http://php.net/manual/en/function.dl.php I've tried all other alternatives but just can't get it to work. enable_dl is set to off by default inside my php.ini; I enabled it, restarted Apache, and it works. If it's safe to use, why is it disabled by default? I'm the only user with access to the server, and it will be hosting a web application. Any advice would be helpful!
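    For what it's worth, dl() is disabled by default partly because loading binary extensions mid-request can destabilize or crash a web server worker. A hedged alternative that avoids enable_dl entirely: register the extension once in php.ini (assuming the PECL build dropped oauth.so into extension_dir) and restart Apache:

        ; load the OAuth extension for every request instead of via dl()
        extension=oauth.so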

    Read the article
