Search Results

Search found 190 results on 8 pages for 'gov'.

  • Countdown to Transit of Venus and a List of Feeds

    - by TATWORTH
    At http://www.space.com/14568-venus-transits-sun-2012-skywatching.html there is a countdown to the transit of Venus. NASA will be providing a video feed of the event from Mauna Kea at http://venustransit.nasa.gov/2012/transit/webcast.php. The SLOOH space camera site will provide a feed at http://www.slooh.com/transit-of-venus/ and Astronomers Without Borders will provide a feed from Mount Wilson at http://www.astronomerswithoutborders.org/projects/transit-of-venus.html. Other web camera feeds are at:

        http://www.skywatchersindia.com/
        http://venustransit.nasa.gov/transitofvenus/
        http://venustransit.nso.edu/
        http://www.transitofvenus.com.au/HOME.html
        http://www.exploratorium.edu/venus/
        http://www.bareket-astro.com/live-astronomical-web-cast/live-free-venus-transit-webcast-6-june-2012.html
        http://cas.appstate.edu/streams/2012/05/physics-and-astronomy-astrocam
        http://skycenter.arizona.edu/

    Read the article

  • Load text from specific external DIV using AJAX?

    - by Josh
    I'm trying to load up the estimated world population from http://www.census.gov/ipc/www/popclockworld.html using AJAX, and so far, failing miserably. There's a DIV with the ID "worldnumber" on that page which contains the estimated population, so that's the only text I want to grab from the page. Here's what I've tried:

        $(document).ready(function() {
            $("#population").load('http://www.census.gov/ipc/www/popclockworld.html #worldnumber *');
        });
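
    A likely culprit (not stated in the post) is the browser's same-origin policy: .load() cannot pull HTML from census.gov when the calling page lives on another domain, so the request silently yields nothing. A minimal sketch of the usual workaround, assuming a hypothetical same-origin endpoint (proxy.php here) that fetches the remote page server-side:

        $(document).ready(function() {
            // Ask our own server to fetch the remote page; the trailing
            // selector tells jQuery to keep only the #worldnumber element.
            // proxy.php is a hypothetical endpoint on the page's own domain.
            var remote = encodeURIComponent('http://www.census.gov/ipc/www/popclockworld.html');
            $("#population").load('proxy.php?url=' + remote + ' #worldnumber');
        });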

    Read the article

  • xslt (table) by matching the attribute value.

    - by Magesh
    I need to generate an XSL table for the XML below, for the attributes fname and lname. I have done something wrong in the XPath, I guess. Could someone help me out writing an XSL table for the XML below?

        <sparql>
          <head>
            <variable name="s"/>
            <variable name="fname"/>
            <variable name="lname"/>
          </head>
          <results>
            <result>
              <binding name="s"><uri>http://tn.gov.in/Person/41</uri></binding>
              <binding name="fname"><literal>Gayathri</literal></binding>
              <binding name="lname"><literal>Vasudevan</literal></binding>
            </result>
            <result>
              <binding name="s"><uri>http://tn.gov.in/Person/43</uri></binding>
              <binding name="fname"><literal>Vivek</literal></binding>
              <binding name="lname"><literal>Vasudevan</literal></binding>
            </result>
            <result>
              <binding name="s"><uri>http://tn.gov.in/Person/37</uri></binding>
              <binding name="fname"><literal>Magesh</literal></binding>
              <binding name="lname"><literal>Vasudevan</literal></binding>
            </result>
            <result>
              <binding name="s"><uri>http://tn.gov.in/Person/39</uri></binding>
              <binding name="fname"><literal>Vasudevan </literal></binding>
              <binding name="lname"><literal>Srinivasan</literal></binding>
            </result>
          </results>
        </sparql>
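
    Since the sample carries no XML namespace, a plain XSLT 1.0 stylesheet along these lines should work (a sketch; if the real response uses the SPARQL results namespace http://www.w3.org/2005/sparql-results#, each element name would need a prefix bound to it):

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/sparql/results">
            <table>
              <tr><th>fname</th><th>lname</th></tr>
              <xsl:for-each select="result">
                <tr>
                  <!-- select each binding by matching its name attribute -->
                  <td><xsl:value-of select="binding[@name='fname']/literal"/></td>
                  <td><xsl:value-of select="binding[@name='lname']/literal"/></td>
                </tr>
              </xsl:for-each>
            </table>
          </xsl:template>
        </xsl:stylesheet>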

    Read the article

  • XDocument unable to digest url in header if encountered twice

    - by Paul Connolly
    Hi there, I am consuming an XML response from a government gateway which contains a URL in its root node twice (firstly as xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope" and also as xmlns="http://www.govtalk.gov.uk/CM/envelope"). XDocument will only parse this if I pull the second one (the xmlns one) out of the node. Is there some way I can prepare XDocument to digest this repeated URL without having to manipulate the incoming XML in any way? Thanks, Paul
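
    For what it's worth, a repeated URL is not by itself a problem for the parser: xsi:schemaLocation is just an attribute value, while xmlns sets the default namespace. What a default namespace usually breaks is not XDocument.Parse but unqualified element lookups afterwards, so one thing to try is keeping the declaration and qualifying the queries instead. A sketch (the element names here are placeholders, not taken from the gateway schema):

        using System;
        using System.Xml.Linq;

        class GovTalkExample
        {
            static void Main()
            {
                string xml =
                    "<Envelope xmlns=\"http://www.govtalk.gov.uk/CM/envelope\">" +
                    "<Version>2.0</Version></Envelope>";

                XDocument doc = XDocument.Parse(xml);  // parses with xmlns intact

                // Qualify element names with the default namespace when querying.
                XNamespace ns = "http://www.govtalk.gov.uk/CM/envelope";
                Console.WriteLine(doc.Root.Element(ns + "Version").Value);
            }
        }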

    Read the article

  • Creating typed WSDL’s for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott

    Question

    How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts? Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDLs for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas\contracts.

    Recommendation

    Take a copy of the generic WSDLs and modify them to use the proper contracts. This is very easy. It will work with the generic on-ramps so long as the <part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations. Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare. I have not managed it.

    Consequences

    I can only see the following consequences of removing the <part> wrapper:

    ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages.

    Flat file formatted messages – the endpoint will no longer support flat file message formats. However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway with its own independently configured XML disassembler pipeline component.

    Instructions

    These steps show how to implement this for a request-response service.

    WCF Receive Locations

    In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to "UseBodyPath":

        Inbound BizTalk message body  = Body
        Outbound WCF message body     = Body

    Create a WSDL for each supported integration use-case

    Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl). Add any XSD schema files imported below to the same folder.
    Edit the WSDL to import schemas

    For example, this:

        <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports" />

    … would become something like:

        <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">
            <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd"
                        namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
            <xsd:import schemaLocation="SuppliersDocument_V1.xsd"
                        namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
            <xsd:import schemaLocation="Types\Supplier_V1.xsd"
                        namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>
            <xsd:import schemaLocation="GovTalk\bs7666-v2-0.xsd"
                        namespace="http://www.govtalk.gov.uk/people/bs7666"/>
            <xsd:import schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd"
                        namespace="http://www.govtalk.gov.uk/core"/>
            <xsd:import schemaLocation="GovTalk\AddressTypes-v2-0.xsd"
                        namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/>
        </xsd:schema>

    Modify the Input and Output message

    For example, this:

        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
            <wsdl:part name="part" type="xsd:anyType"/>
        </wsdl:message>
        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
            <wsdl:part name="part" type="xsd:anyType"/>
        </wsdl:message>

    … would become something like:

        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
            <wsdl:part name="part"
                       element="ssc:SupplierSearchEvent"
                       xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
        </wsdl:message>
        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
            <wsdl:part name="part"
                       element="sd:SuppliersDocument"
                       xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
        </wsdl:message>

    This WSDL can now be added as a service reference in client solutions.

    Read the article

  • java.sql.Exception ClosedConnection

    - by john
    I am getting the following error:

        java.sql.SQLException: Closed Connection
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208)
            at oracle.jdbc.driver.PhysicalConnection.getMetaData(PhysicalConnection.java:1508)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.moveToNextResultsSafely(SqlExecutor.java:348)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:320)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:277)
            at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:34)
            at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
            at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
            at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
            at gov.hud.pih.eiv.web.authentication.AuthenticationUserDAO.isPihUserDAO(AuthenticationUserDAO.java:24)
            at gov.hud.pih.eiv.web.authorization.AuthorizationProxy.isAuthorized(AuthorizationProxy.java:125)
            at gov.hud.pih.eiv.web.authorization.AuthorizationFilter.doFilter(AuthorizationFilter.java:224)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
            at ...

    I am really stumped and can't figure out what could be causing this error. I am not able to reproduce the error on my machine, but on production it comes up a lot. I am using iBatis throughout the application, so there is no chance of my code not closing connections. We do have stored procedures that run for a long time before they return results (around 15 seconds). Does anyone have any ideas on what could be causing this? I don't think raising the number of connections on the application server will fix this issue, because if connections were running out then we'd see "Error on allocating connections".
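
    One avenue worth checking (an assumption, not a diagnosis from the post): pools that reclaim "abandoned" connections can close a connection that is still executing a slow stored procedure, which then surfaces as "Closed Connection" while iBatis walks the results. If the pool were Apache Commons DBCP, for example, the relevant knobs would look like this sketch:

        import org.apache.commons.dbcp.BasicDataSource;

        public class PoolConfig {
            public static BasicDataSource dataSource() {
                BasicDataSource ds = new BasicDataSource();
                ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
                ds.setUsername("app");
                ds.setPassword("secret");
                // If abandoned-connection reclamation is on, the timeout must
                // comfortably exceed the longest stored procedure (~15s here),
                // or a busy connection gets closed out from under the caller.
                ds.setRemoveAbandoned(true);
                ds.setRemoveAbandonedTimeout(60); // seconds
                return ds;
            }
        }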

    Read the article

  • How can I use Qt to get html code of the redirected page??

    - by Claire Huang
    I'm trying to use Qt to download the HTML code from the following URL:

        http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=nucleotide&cmd=search&term=AB100362

    This URL will redirect to www.ncbi.nlm.nih.gov/nuccore/27884304. I try to do it in the following way, but I cannot get anything. It works for some web pages such as www.google.com, but not for this NCBI page. Is there any way to get this page?

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            QNetworkReply *reply = manager.get(request);

            // Spin a local event loop until the request finishes.
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();

            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

        void GetGi()
        {
            QString sGetFromURL = "http://www.ncbi.nlm.nih.gov/entrez/query.fcgi";
            QUrl url(sGetFromURL);
            url.addQueryItem("db", "nucleotide");
            url.addQueryItem("cmd", "search");
            url.addQueryItem("term", "AB100362");

            QByteArray InfoNCBI;
            int errorCode = downloadURL(url, InfoNCBI);
            if (errorCode != 0) {
                QMessageBox::about(0, QObject::tr("Internet Error"),
                    QObject::tr("Internet Error %1: Failed to connect to NCBI.\t\nPlease check your internet connection.").arg(errorCode));
                return;   // GetGi() is void, so it cannot return "ERROR"
            }
        }
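
    A likely explanation (not stated in the excerpt): the entrez URL answers with an HTTP redirect, and QNetworkAccessManager in Qt 4 does not follow redirects on its own, so the reply body comes back empty. The reply does carry the redirect target, which downloadURL() could chase itself. A sketch, inserted just before the readAll() call above (a production version would cap the number of hops to avoid redirect loops):

        // Check whether the server redirected us; if so, follow the redirect.
        QVariant target = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
        if (target.isValid()) {
            // Redirect URLs may be relative; resolve against the request URL.
            QUrl redirectedTo = url.resolved(target.toUrl());
            delete reply;
            return downloadURL(redirectedTo, data);
        }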

    Read the article

  • Using a Scala symbol literal results in NoSuchMethod

    - by Benson
    I have recently begun using Scala. I've written a DSL in it which can be used to describe a processing pipeline in medici. In my DSL, I've made use of symbols to signify an anchor, which can be used to put a fork (or a tee, if you prefer) in the pipeline. Here's a small sample program that runs correctly:

        object Test extends PipelineBuilder {
            connector("TCP") / Map("tcpProtocol" -> new DirectProtocol())
            "tcp://localhost:4858" --> "ByteToStringProcessor" --> Symbol("hello")
            "stdio://in?promptMessage=enter name:%20" --> Symbol("hello")
            Symbol("hello") --> "SayHello" / Map("prefix" -> "\n\t") --> "stdio://out"
        }

    For some reason, when I use a symbol literal in my program, I get a NoSuchMethodError at runtime:

        java.lang.NoSuchMethodError: scala.Symbol.intern()Lscala/Symbol;
            at gov.pnnl.mif.scaladsl.Test$.<init>(Test.scala:7)
            at gov.pnnl.mif.scaladsl.Test$.<clinit>(Test.scala)
            at gov.pnnl.mif.scaladsl.Test.main(Test.scala)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at scala.tools.nsc.ObjectRunner$$anonfun$run$1.apply(ObjectRunner.scala:75)
            at scala.tools.nsc.ObjectRunner$.withContextClassLoader(ObjectRunner.scala:49)
            at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:74)
            at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:154)
            at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)

    This happens regardless of how the symbol is used. Specifically, I've tried using the symbol in the pipeline, and in a simple println('foo) statement. The question: what could possibly cause a symbol literal's mere existence to cause a NoSuchMethodError? In my DSL I am using an implicit function which converts symbols to instances of the Anchor class, like so:

        implicit def makeAnchor(a: Symbol): Anchor = anchor(a)

    Sadly, my understanding of Scala is weak enough that I can't think of why that might be causing my NoSuchMethodError.
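
    An observation on the excerpt, not from the post: a NoSuchMethodError on a core method like scala.Symbol.intern() is the classic sign of compiling against one Scala release and running against another's scala-library.jar, since Symbol's implementation changed between 2.7 and 2.8. A quick sanity check, assuming Scala 2.8 or later at runtime:

        // Print the library version actually on the runtime classpath and
        // compare it with the compiler that built the DSL (scalac -version).
        object VersionCheck {
          def main(args: Array[String]): Unit =
            println(scala.util.Properties.versionString)  // e.g. "version 2.8.1.final"
        }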

    Read the article

  • jQuery AJAX with two domains

    - by Andrew Burns
    OK here is the situation: I have an externally hosted CMS which works great for 99% of our needs. However, on the more advanced things I inject my own CSS+JS and do magic. The problem I am running into is loading a simple HTML page from jQuery.ajax() calls. It appears to work in the sense that no warnings or errors are thrown; however, in my success handler (which IS run), the response is blank! I have been scratching my head the whole morning trying to figure this out, and the only thing I can think of is that it has something to do with the cross-domain issue (even though it appears to work). Injected JavaScript:

        $(document).ready(function() {
            doui();
        });

        function doui() {
            $.ajax({
                url: 'http://apps.natronacounty-wy.gov/css/feecalc/ui.htm',
                cache: false,
                success: ajax_createUI,
                charset: "utf-8",
                error: function(e) { alert(e); }
            });
        }

        function ajax_createUI(data, textStatus) {
            alert(data);
            $("#ajax-content").html(data);
        }

    My ajax_createUI() success handler is called and textStatus is "success"; however, data is empty. This JS file resides at http://apps.natronacounty-wy.gov/css/js/feecalc.js, whereas the CMS website (which gets the JS injected into it) resides at http://www.natronacounty-wy.gov/. Am I just being stupid, or is it a bug that it looks like it should be working but isn't?
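
    Cross-domain is indeed the prime suspect here: www.natronacounty-wy.gov and apps.natronacounty-wy.gov are different origins, so the browser performs the request but hands the script an empty response. A minimal sketch of the simplest fix, assuming ui.htm can be mirrored (or proxied) onto the same host that serves the CMS pages:

        function doui() {
            $.ajax({
                // A relative URL keeps the request same-origin with the CMS site;
                // this assumes /css/feecalc/ui.htm exists on www.natronacounty-wy.gov.
                url: '/css/feecalc/ui.htm',
                cache: false,
                success: ajax_createUI,
                error: function(xhr, status) { alert(status); }
            });
        }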

    Read the article

  • How to seek to a specific time in a RTP stream?

    - by Cipi
    I am streaming a prerecorded H264 video that has the following structure:

        [I] [x] [x] [x] [I] [x] [x] [x] [I] ...

    In between the IDRs (the I's in my structure) I have 32 other frames (only 3 are presented here; all the other stuff that is not IDR, like SEI, SPS, PPS, are the x's). Now, let's assume that the timing of my frames is as follows:

        TIME:  1   2   3   4   5   6   7   8   9
        FRAME: [I] [x] [x] [x] [I] [x] [x] [x] [I] ...

    Now I want to seek to time 4. If I seek to that frame and send it, the picture gets messed up because the decoder needs an IDR to decode it properly, so I resorted to finding the appropriate IDR (in this case the one at time 1) and sending it as the frame with time 4. So now the picture is decoded properly, all is well... but... If my GOV is 32, and I need to send the non-IDR frame that has index 31, and if the time span between it and the corresponding IDR is 3 seconds, I actually get 3 seconds earlier than the time I want. So this is not precise, because I cannot seek into the middle of the GOV time span. Also, I can't set a smaller GOV, so I want other ideas... My other idea was to send the last known IDR, and then send all the other non-IDR frames that come before the one I want, only I would set the RTP time for all of them to be the same as the corresponding IDR's. In this case the picture gets decoded perfectly, but now, in the above case, the 3 seconds that follow the non-IDR frame with the wanted time get fast-paced in the decoder/player (there is no instantaneous seek)... Any ideas? Or can I only seek to IDRs and not the frames in between?

    Read the article

  • mod rewrite to remove facebook query string

    - by user905752
    I can't get my query string rewrite to work... please help. I have the following URL:

        http://betatest.bracknell-forest.gov.uk/help?fb_action_ids=372043216205703&fb_action_types=og.likes&fb_source=aggregation&fb_aggregation_id=288381481237582

    (sorry, but the page will be unavailable as it is a test internal domain link) I want the following URL:

        http://betatest.bracknell-forest.gov.uk/help

    I get a browser message saying 'The system cannot find the file specified.' I know it is because I already have a mod rewrite to remove the .htm from the page name to return clean URLs, but I don't know what I need to do to accept a clean URL and return the page. Here is the mod rewrite code I have:

        RewriteRule ^/([\w]+)$ /$1.htm [I,L]
        #Any bare URL will get rewritten to a URL with .htm appended
        RedirectRule ^/(.+)\.(htm)$ http://betatest.bracknell-forest.gov.uk/$1 [R=301]
        RewriteCond %{QUERY_STRING} ^fb_action_ids=(.)$
        #if the query string contains fb_action_ids
        RewriteCond %{QUERY_STRING} !=""
        #if there is a query string
        RewriteRule ^(.*) $1? [R=301,L]

    I think it is because I am using R=301 twice, but I do not know what I need to use as an alternative. If I append .htm (from help?fb_action_ids... to help.htm?fb_action_ids...) this returns the required page fine, but I need to return the page for the non-appended URL. Many thanks for any help in advance.
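
    For what it's worth, a hedged sketch of one way to order this (illustrative only: the [I] flag above suggests an ISAPI-style rewrite filter rather than Apache mod_rewrite, so the exact syntax may differ). First issue a single external redirect that drops the Facebook parameters, then map the clean URL onto its .htm file internally with no second redirect:

        # 1. If the query string carries fb_action_ids, redirect once to the
        #    bare URL (the trailing "?" on the target empties the query string).
        RewriteCond %{QUERY_STRING} ^fb_action_ids=
        RewriteRule ^/(.*)$ /$1? [R=301,L]

        # 2. Then rewrite bare URLs to their .htm files internally (no redirect).
        RewriteRule ^/([\w]+)$ /$1.htm [L]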

    Read the article

  • How can I use Qt to get html code of this NCBI page??

    - by user308503
    I'm trying to use Qt to download the HTML code from the following URL:

        http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=nucleotide&cmd=search&term=AB100362

    This URL will redirect to www.ncbi.nlm.nih.gov/nuccore/27884304. I try to do it in the following way, but I cannot get anything. It works for some web pages such as www.google.com, but not for this NCBI page. Is there any way to get this page?

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            QNetworkReply *reply = manager.get(request);

            // Spin a local event loop until the request finishes.
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();

            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

        void GetGi()
        {
            QString sGetFromURL = "http://www.ncbi.nlm.nih.gov/entrez/query.fcgi";
            QUrl url(sGetFromURL);
            url.addQueryItem("db", "nucleotide");
            url.addQueryItem("cmd", "search");
            url.addQueryItem("term", "AB100362");

            QByteArray InfoNCBI;
            int errorCode = downloadURL(url, InfoNCBI);
            if (errorCode != 0) {
                QMessageBox::about(0, QObject::tr("Internet Error"),
                    QObject::tr("Internet Error %1: Failed to connect to NCBI.\t\nPlease check your internet connection.").arg(errorCode));
                return;   // GetGi() is void, so it cannot return "ERROR"
            }
        }

    Read the article

  • Get other ldap query strings associated with a domain.

    - by seekerOfKnowledge
    I have in Softerra LDAP Administrator something like the following:

        server: blah.gov
            OU=Domain Controllers
            etc...
        ldap://subdomain.blah.gov

    I can't figure out how to, in C#, get those other LDAP subdomain query strings. I'm not sure how else to explain it, so ask questions and I'll try to clarify. Updated: this is what Softerra LDAP Administrator looks like. The LDAP queries near the bottom are not children of the above node, but somehow the program knows about them and linked them in the GUI. If I could figure out how, that would fix my problem.
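
    A hedged sketch of one possibility, assuming those extra ldap:// entries correspond to child domains of blah.gov in Active Directory (an assumption about the screenshot, not stated in the post). The System.DirectoryServices.ActiveDirectory API can enumerate them directly:

        using System;
        using System.DirectoryServices.ActiveDirectory;

        class ChildDomains
        {
            static void Main()
            {
                // Bind to the named domain rather than the machine's current one.
                var context = new DirectoryContext(DirectoryContextType.Domain, "blah.gov");
                Domain root = Domain.GetDomain(context);

                // Each child domain would yield an ldap://subdomain.blah.gov entry.
                foreach (Domain child in root.Children)
                    Console.WriteLine("ldap://" + child.Name);
            }
        }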

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of squid and dansguardian. We use basic authentication to select what users are allowed outside of our network. It seems squid is working just fine and accepts my username and password and lets me out. But if I connect to DansGuardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. It's pulling my username for the error message, so I know it knows who I am. The part I get confused on is I thought that part was handled all by squid, and squid is working flawlessly. Can someone please double-check my config files and tell me if I'm missing something, or if there is some new option I must set to get this to work?

    dansguardian.conf

        # Web Access Denied Reporting (does not affect logging)
        #
        # -1 = log, but do not block - Stealth mode
        #  0 = just say 'Access Denied'
        #  1 = report why but not what denied phrase
        #  2 = report fully
        #  3 = use HTML template file (accessdeniedaddress ignored) - recommended
        #
        reportinglevel = 3

        # Language dir where languages are stored for internationalisation.
        # The HTML template within this dir is only used when reportinglevel
        # is set to 3. When used, DansGuardian will display the HTML file instead of
        # using the perl cgi script. This option is faster, cleaner
        # and easier to customise the access denied page.
        # The language file is used no matter what setting however.
        #
        languagedir = '/etc/dansguardian/languages'

        # language to use from languagedir.
        language = 'ukenglish'

        # Logging Settings
        #
        # 0 = none  1 = just denied  2 = all text based  3 = all requests
        loglevel = 3

        # Log Exception Hits
        # Log if an exception (user, ip, URL, phrase) is matched and so
        # the page gets let through. Can be useful for diagnosing
        # why a site gets through the filter. on | off
        logexceptionhits = on

        # Log File Format
        # 1 = DansGuardian format  2 = CSV-style format
        # 3 = Squid Log File Format  4 = Tab delimited
        logfileformat = 1

        # Log file location
        #
        # Defines the log directory and filename.
        #loglocation = '/var/log/dansguardian/access.log'

        # Network Settings
        #
        # the IP that DansGuardian listens on. If left blank DansGuardian will
        # listen on all IPs. That would include all NICs, loopback, modem, etc.
        # Normally you would have your firewall protecting this, but if you want
        # you can limit it to only 1 IP. Yes only one.
        filterip =

        # the port that DansGuardian listens to.
        filterport = 8080

        # the ip of the proxy (default is the loopback - i.e. this server)
        proxyip = 127.0.0.1

        # the port DansGuardian connects to proxy on
        proxyport = 3128

        # accessdeniedaddress is the address of your web server to which the cgi
        # dansguardian reporting script was copied
        # Do NOT change from the default if you are not using the cgi.
        #
        accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl'

        # Non standard delimiter (only used with accessdeniedaddress)
        # Default is enabled but to go back to the original standard mode dissable it.
        nonstandarddelimiter = on

        # Banned image replacement
        # Images that are banned due to domain/url/etc reasons including those
        # in the adverts blacklists can be replaced by an image. This will,
        # for example, hide images from advert sites and remove broken image
        # icons from banned domains.
        # 0 = off
        # 1 = on (default)
        usecustombannedimage = 1
        custombannedimagefile = '/etc/dansguardian/transparent1x1.gif'

        # Filter groups options
        # filtergroups sets the number of filter groups. A filter group is a set of content
        # filtering options you can apply to a group of users. The value must be 1 or more.
        # DansGuardian will automatically look for dansguardianfN.conf where N is the filter
        # group. To assign users to groups use the filtergroupslist option. All users default
        # to filter group 1. You must have some sort of authentication to be able to map users
        # to a group. The more filter groups the more copies of the lists will be in RAM so
        # use as few as possible.
        filtergroups = 1
        filtergroupslist = '/etc/dansguardian/filtergroupslist'

        # Authentication files location
        bannediplist = '/etc/dansguardian/bannediplist'
        exceptioniplist = '/etc/dansguardian/exceptioniplist'
        banneduserlist = '/etc/dansguardian/banneduserlist'
        exceptionuserlist = '/etc/dansguardian/exceptionuserlist'

        # Show weighted phrases found
        # If enabled then the phrases found that made up the total which excedes
        # the naughtyness limit will be logged and, if the reporting level is
        # high enough, reported. on | off
        showweightedfound = on

        # Weighted phrase mode
        # There are 3 possible modes of operation:
        # 0 = off = do not use the weighted phrase feature.
        # 1 = on, normal = normal weighted phrase operation.
        # 2 = on, singular = each weighted phrase found only counts once on a page.
        weightedphrasemode = 2

        # Positive result caching for text URLs
        # Caches good pages so they don't need to be scanned again
        # 0 = off (recommended for ISPs with users with disimilar browsing)
        # 1000 = recommended for most users
        # 5000 = suggested max upper limit
        urlcachenumber =
        #
        # Age before they are stale and should be ignored in seconds
        # 0 = never
        # 900 = recommended = 15 mins
        urlcacheage =

        # Smart and Raw phrase content filtering options
        # Smart is where the multiple spaces and HTML are removed before phrase filtering
        # Raw is where the raw HTML including meta tags are phrase filtered
        # CPU usage can be effectively halved by using setting 0 or 1
        # 0 = raw only
        # 1 = smart only
        # 2 = both (default)
        phrasefiltermode = 2

        # Lower casing options
        # When a document is scanned the uppercase letters are converted to lower case
        # in order to compare them with the phrases. However this can break Big5 and
        # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented
        # characters are supported.
        # 0 = force lower case (default)
        # 1 = do not change case
        preservecase = 0

        # Hex decoding options
        # When a document is scanned it can optionally convert %XX to chars.
        # If you find documents are getting past the phrase filtering due to encoding
        # then enable. However this can break Big5 and other 16-bit texts.
        # 0 = disabled (default)
        # 1 = enabled
        hexdecodecontent = 0

        # Force Quick Search rather than DFA search algorithm
        # The current DFA implementation is not totally 16-bit character compatible
        # but is used by default as it handles large phrase lists much faster.
        # If you wish to use a large number of 16-bit character phrases then
        # enable this option.
        # 0 = off (default)
        # 1 = on (Big5 compatible)
        forcequicksearch = 0

        # Reverse lookups for banned site and URLs.
        # If set to on, DansGuardian will look up the forward DNS for an IP URL
        # address and search for both in the banned site and URL lists. This would
        # prevent a user from simply entering the IP for a banned address.
        # It will reduce searching speed somewhat so unless you have a local caching
        # DNS server, leave it off and use the Blanket IP Block option in the
        # bannedsitelist file instead.
        reverseaddresslookups = off

        # Reverse lookups for banned and exception IP lists.
        # If set to on, DansGuardian will look up the forward DNS for the IP
        # of the connecting computer. This means you can put in hostnames in
        # the exceptioniplist and bannediplist.
        # It will reduce searching speed somewhat so unless you have a local DNS server,
        # leave it off.
        reverseclientiplookups = off

        # Build bannedsitelist and bannedurllist cache files.
        # This will compare the date stamp of the list file with the date stamp of
        # the cache file and will recreate as needed.
        # If a bsl or bul .processed file exists, then that will be used instead.
        # It will increase process start speed by 300%. On slow computers this will
        # be significant. Fast computers do not need this option. on | off
        createlistcachefiles = on

        # POST protection (web upload and forms)
        # does not block forms without any file upload, i.e. this is just for
        # blocking or limiting uploads
        # measured in kibibytes after MIME encoding and header bumph
        # use 0 for a complete block
        # use higher (e.g. 512 = 512Kbytes) for limiting
        # use -1 for no blocking
        #maxuploadsize = 512
        #maxuploadsize = 0
        maxuploadsize = -1

        # Max content filter page size
        # Sometimes web servers label binary files as text which can be very
        # large which causes a huge drain on memory and cpu resources.
        # To counter this, you can limit the size of the document to be
        # filtered and get it to just pass it straight through.
        # This setting also applies to content regular expression modification.
        # The size is in Kibibytes - eg 2048 = 2Mb
        # use 0 for no limit
        maxcontentfiltersize =

        # Username identification methods (used in logging)
        # You can have as many methods as you want and not just one. The first one
        # will be used then if no username is found, the next will be used.
        # * proxyauth is for when basic proxy authentication is used (no good for
        #   transparent proxying).
        # * ntlm is for when the proxy supports the MS NTLM authentication
        #   protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED**
        # * ident is for when the others don't work. It will contact the computer
        #   that the connection came from and try to connect to an identd server
        #   and query it for the user owner of the connection.
        usernameidmethodproxyauth = on
        usernameidmethodntlm = off # **NOT IMPLEMENTED**
        usernameidmethodident = off

        # Preemptive banning - this means that if you have proxy auth enabled and a user accesses
        # a site banned by URL for example they will be denied straight away without a request
        # for their user and pass. This has the effect of requiring the user to visit a clean
        # site first before it knows who they are and thus maybe an admin user.
        # This is how DansGuardian has always worked but in some situations it is less than
        # ideal. So you can optionally disable it. Default is on.
        # As a side effect disabling this makes AD image replacement work better as the mime
        # type is know.
        preemptivebanning = on

        # Misc settings
        # if on it adds an X-Forwarded-For: <clientip> to the HTTP request
        # header. This may help solve some problem sites that need to know the
        # source ip. on | off
        forwardedfor = on

        # if on it uses the X-Forwarded-For: <clientip> to determine the client
        # IP. This is for when you have squid between the clients and DansGuardian.
        # Warning - headers are easily spoofed. on | off
        usexforwardedfor = off

        # if on it logs some debug info regarding fork()ing and accept()ing which
        # can usually be ignored. These are logged by syslog. It is safe to leave
        # it on or off
        logconnectionhandlingerrors = on

        # Fork pool options
        # sets the maximum number of processes to sporn to handle the incomming
        # connections. Max value usually 250 depending on OS.
        # On large sites you might want to try 180.
        maxchildren = 180

        # sets the minimum number of processes to sporn to handle the incomming connections.
        # On large sites you might want to try 32.
        minchildren = 32

        # sets the minimum number of processes to be kept ready to handle connections.
        # On large sites you might want to try 8.
        minsparechildren = 8

        # sets the minimum number of processes to sporn when it runs out
        # On large sites you might want to try 10.
        preforkchildren = 10

        # sets the maximum number of processes to have doing nothing.
        # When this many are spare it will cull some of them.
        # On large sites you might want to try 64.
        maxsparechildren = 64

        # sets the maximum age of a child process before it croaks it.
        # This is the number of connections they handle before exiting.
        # On large sites you might want to try 10000.
        maxagechildren = 5000

        # Process options
        # (Change these only if you really know what you are doing).
        # These options allow you to run multiple instances of DansGuardian on a single machine.
        # Remember to edit the log file path above also if that is your intention.

        # IPC filename
        #
        # Defines IPC server directory and filename used to communicate with the log process.
        ipcfilename = '/tmp/.dguardianipc'

        # URL list IPC filename
        #
        # Defines URL list IPC server directory and filename used to communicate with the URL
        # cache process.
        urlipcfilename = '/tmp/.dguardianurlipc'

        # PID filename
        #
        # Defines process id directory and filename.
        #pidfilename = '/var/run/dansguardian.pid'

        # Disable daemoning
        # If enabled the process will not fork into the background.
        # It is not usually advantageous to do this.
        # on|off ( defaults to off )
        nodaemon = off

        # Disable logging process
        # on|off ( defaults to off )
        nologger = off

        # Daemon runas user and group
        # This is the user that DansGuardian runs as. Normally the user/group nobody.
        # Uncomment to use. Defaults to the user set at compile time.
        # daemonuser = 'nobody'
        # daemongroup = 'nobody'

        # Soft restart
        # When on this disables the forced killing off all processes in the process group.
        # This is not to be confused with the -g run time option - they are not related.
        # on|off ( defaults to off )
        softrestart = off

        maxcontentramcachescansize = 2000
        maxcontentfilecachescansize = 20000
        downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf'
        authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf'

    squid.conf

        http_port 3128
        hierarchy_stoplist cgi-bin ?
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        acl apache rep_header Server ^Apache
        #broken_vary_encoding allow apache
        access_log /squid/var/logs/access.log squid
        hosts_file /etc/hosts
        auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth
        auth_param basic children 5
        auth_param basic realm proxy
        auth_param basic credentialsttl 2 hours
        auth_param basic casesensitive off
        refresh_pattern ^ftp:    1440 20% 10080
        refresh_pattern ^gopher: 1440  0%  1440
        refresh_pattern .           0 20%  4320
        acl NoAuthNec src <HIDDEN FOR SECURITY>
        acl BrkRm src <HIDDEN FOR SECURITY>
        acl Dials src <HIDDEN FOR SECURITY>
        acl Comps src <HIDDEN FOR SECURITY>
        acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu.
        acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com
        acl password proxy_auth REQUIRED
        acl all src all
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000
        acl Safe_ports port 80           # http
        acl Safe_ports port 21           # ftp
        acl Safe_ports port 443 563      # https, snews
        acl Safe_ports port 70           # gopher
        acl Safe_ports port 210          # wais
        acl Safe_ports port 1936-65535   # unregistered ports
        acl Safe_ports port 280          # http-mgmt
        acl Safe_ports port 488          # gss-http
        acl Safe_ports port 10000
        acl Safe_ports port 631
        acl Safe_ports port 901          # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list"
        acl RestrictUTube dstdom_regex -i youtube.com
        acl RestrictFacebook dstdom_regex -i facebook.com
        acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list"
        acl BuemerKEC src 10.10.128.0/24
        acl MBSsortnet src 10.10.128.0/26
        acl MSNExplorer browser -i MSN
        acl Printers src <HIDDEN FOR SECURITY>
        acl SpecialFolks src <HIDDEN FOR SECURITY>
        # streaming download
        acl fails rep_mime_type ^.*mms.*
        acl fails rep_mime_type ^.*ms-hdr.*
        acl fails rep_mime_type ^.*x-fcs.*
        acl fails rep_mime_type ^.*x-ms-asf.*
        acl fails2 urlpath_regex dvrplayer mediastream mms://
        acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$
        acl deny_rep_mime_flashvideo rep_mime_type -i video/flv
        acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$
        acl x-type req_mime_type -i ^application/octet-stream$
        acl x-type req_mime_type -i application/octet-stream
        acl x-type req_mime_type -i ^application/x-mplayer2$
        acl x-type req_mime_type -i application/x-mplayer2
        acl x-type req_mime_type -i ^application/x-oleobject$
        acl x-type req_mime_type -i application/x-oleobject
        acl x-type req_mime_type -i application/x-pncmd
        acl x-type req_mime_type -i ^video/x-ms-asf$
        acl x-type2 rep_mime_type -i ^application/octet-stream$
        acl x-type2 rep_mime_type -i application/octet-stream
        acl x-type2 rep_mime_type -i ^application/x-mplayer2$
        acl x-type2 rep_mime_type -i application/x-mplayer2
        acl x-type2 rep_mime_type -i ^application/x-oleobject$
        acl x-type2 rep_mime_type -i application/x-oleobject
        acl x-type2 rep_mime_type -i application/x-pncmd
        acl x-type2 rep_mime_type -i ^video/x-ms-asf$
        acl RestrictHulu dstdom_regex -i hulu.com
        acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com
        acl RestrictVimeo dstdom_regex -i vimeo.com
        acl http_port port 80
        #http_reply_access deny deny_rep_mime_flashvideo
        #http_reply_access deny deny_rep_mime_shockwave
        #streaming files
        #http_access deny fails
        #http_reply_access deny fails
        #http_access deny fails2
        #http_reply_access deny fails2
        #http_access deny x-type
        #http_reply_access deny x-type
        #http_access deny x-type2
        #http_reply_access deny x-type2
        follow_x_forwarded_for allow localhost
        acl_uses_indirect_client on
        log_uses_indirect_client on
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access allow SpecialFolks
        http_access deny CONNECT !SSL_ports
        http_access allow whsws
        http_access allow opensites
        http_access deny BuemerKEC !MBSsortnet
        http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo
        http_access allow RestrictUTube UTubeUsers
        http_access deny RestrictUTube
        http_access allow RestrictFacebook FacebookUsers
        http_access deny RestrictFacebook
        http_access deny RestrictHulu
        http_access allow NoAuthNec
        http_access allow BrkRm
        http_access allow FacebookUsers RestrictVimeo
        http_access deny RestrictVimeo
        http_access allow Comps
        http_access allow Dials
        http_access allow Printers
        http_access allow password
        http_access deny !Safe_ports
        http_access deny SSL_ports !CONNECT
        http_access allow http_port
        http_access deny all
        http_reply_access allow all
        icp_access allow all
        access_log /squid/var/logs/access.log squid
        visible_hostname proxy.site.com
        forwarded_for off
        coredump_dir /squid/cache/
        #header_access Accept-Encoding deny broken
        #acl snmppublic snmp_community mysecretcommunity
        #snmp_port 3401
        #snmp_access allow snmppublic all
        cache_mem 3 GB
        #acl snmppublic snmp_community mbssquid
        #snmp_port 3401
        #snmp_access allow snmppublic all

    Read the article

  • JSON Feed Appears to be XHR when it should be JS

    - by Oscar Godson
    I don't get why it's doing this with the 2nd feed (appearing as an XHR call rather than just JS, looking at it in Firefox/Firebug). The 2nd feed has the exact same MIME type as Flickr's JSON feed, yet the PortlandOregon.gov one shows as XHR and I get a NULL callback when using $.getJSON, and if I use $.ajax with a 'json' or 'jsonp' type I get nothing at all. If I do the Flickr one I get the normal "[object Object]" callback. What's going on? Please help! This has been such a headache for about a week. And I have authorization to change the feed, but I have to request the change, so if anyone knows for absolute sure, let me know that!

    Response headers from Flickr's API (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?) [JS]:

        Date Mon, 15 Mar 2010 21:56:06 GMT
        P3P policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
        Expires Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified Mon, 15 Mar 2010 21:52:17 GMT
        Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma no-cache
        Vary Accept-Encoding
        Content-Encoding gzip
        Content-Length 3647
        Connection close
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host api.flickr.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Cookie BX=4lflj455amesp&b=3&s=iv; fltoto=0%2C0%2C0%2C0%2C1%2C0%3B0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%3B1%3B0%3B; search_z=t; localization=en-us%3Bus%3Bus

    PortlandOregon.gov (http://www.portlandonline.com/shared/cfm/json.cfm?c=27321) [XHR]:

    Response headers:

        Connection close
        Date Mon, 15 Mar 2010 21:57:49 GMT
        Server Microsoft-IIS/6.0
        Set-Cookie CONTACT_ID=0;path=/ LAST_USER=;path=/ BIGipServercgis_pol_web_pool-http=1191537418.20480.0000; path=/
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host www.portlandonline.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept application/json, text/javascript, */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Origin http://oscargodson.com
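
    An observation on the two URLs (not from the post): the Flickr URL carries jsoncallback=?, so jQuery turns the call into a JSONP <script> request, which is why Firebug files it under JS. The PortlandOnline URL has no callback parameter, so jQuery falls back to a plain cross-domain XHR, which the browser blocks, leaving the handler with nothing. A sketch of what the call would look like if the feed were changed to honor a callback parameter (hypothetical for portlandonline.com, which may use a different parameter name or none at all):

        // Works only if the server wraps its JSON in the supplied callback.
        $.getJSON('http://www.portlandonline.com/shared/cfm/json.cfm?c=27321&callback=?',
            function (data) {
                alert(data); // "[object Object]" once the server supports JSONP
            });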

    Read the article

  • Can I improve this regex check for valid domain names?

    - by Josh
    So, I have been working on this domain name regular expression. So far, it seems to pick up domain names with SLDs and TLDs (with the optional ccTLD), but there is duplication of the TLD listing. Can this be refactored any further?

        params[:domain_name].downcase.strip.match(/^[a-z0-9\-]{2,63}
          \.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)|
          (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]|
          (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)|
          (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]|
          (m[acdghklmnopqrstuvwxyz]|me|mil|mobi|museum)|(n[acefgilopruz]|name|net)|(om|org)|
          (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]|
          (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw])
          (\.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)|
          (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]|
          (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)|
          (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]|
          m[acdghklmnopqrstuvwxyz]|mil|mobi|museum)|
          (n[acefgilopruz]|name|net)|(om|org)|
          (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]|
          (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]))?$/)
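
    One way to remove the duplication (a sketch, not a drop-in replacement): build the TLD alternation once and interpolate it into the pattern twice. TLDS below is deliberately abbreviated, and its [a-z]{2} catch-all for ccTLDs is a simplification; substitute the full alternation from the regex above for an exact match.

        # Define the TLD alternation once and reuse it for both positions.
        TLDS = /(?:aero|arpa|biz|cat|com|coop|edu|gov|info|int|jobs|mil|mobi|museum|name|net|org|pro|travel|[a-z]{2})/

        DOMAIN_NAME = /^[a-z0-9\-]{2,63}\.#{TLDS}(\.#{TLDS})?$/

        params[:domain_name].downcase.strip.match(DOMAIN_NAME)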

    Read the article

  • Total Solar Eclipse 13/November/2012

    - by TATWORTH
    On the 13/November/2012 there will be a total solar eclipse. The only land from which this will be visible is the northern part of Australia. Details of the eclipse are at http://eclipse.gsfc.nasa.gov/SEmono/TSE2012/TSE2012.html

        Cairns webcam: http://www.eclipsecairns.com/
        Wikipedia entry: http://en.wikipedia.org/wiki/Solar_eclipse_of_November_13,_2012
        Panasonic: https://www.facebook.com/PanasonicEclipseLive?ref=stream
        Main location channel from Sheraton Mirage Port Douglas Resort: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-1
        Second location channel from Fitzroy Island: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-2

    Read the article

  • Do you know your DNS server?

    - by John Paul Cook
    If you don’t know your DNS server is valid, you need to find out before July 9. The FBI found rogue DNS servers and replaced them with clean, safe DNS servers to protect the public. These safe, clean servers will be turned off on July 9, 2012. If your computer was compromised to use the rogue servers, it will stop resolving DNS queries on July 9 when the clean servers are turned off. The FBI has provided full technical details at http://www.fbi.gov/news/stories/2011/november/malware_110911/DNS-changer-malware.pdf...(read more)

    Read the article

  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM-based embedded system that lacks a battery-backed RTC. The wake-up time is somewhere in 1970. Thus, I use the NTP service to update the time to the current time. I added the following line to the /etc/rc.local file:

        sudo ntpdate -s time.nist.gov

    However, after startup, it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time?

    UPDATE 1: The following (thanks to Eric and Stephan) works fine from the command line, but fails to update the clock when put in /etc/rc.local:

        $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date
        Thu Jan  1 00:00:58 UTC 1970
         * Stopping NTP server ntpd    [ OK ]
         * Starting NTP server         [ OK ]
        Thu Feb 14 18:52:21 UTC 2013

    What am I doing wrong?

    UPDATE 2: I tried following the few suggestions that came in response to the 1st update, but nothing seems to actually do the job as required. Here's what I tried:

        Replace the server with us.pool.ntp.org
        Use explicit paths to the programs
        Remove the ntp service altogether and leave just sudo ntpdate ... in rc.local
        Remove the sudo from the above command in rc.local

    Using the above, the machine still starts at 1970. However, when doing this from the command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. The last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updated. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes of startup, but then we're back at square 1. So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?
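
    One plausible explanation (an assumption about this board, not from the post): /etc/rc.local runs before the network interface has an address, so ntpdate fails silently at boot but succeeds later from an ssh session. A sketch of an rc.local that retries until the network is up (rc.local already runs as root, so sudo is unnecessary):

        #!/bin/sh
        # Wait up to ~60 seconds for the network, then set the clock.
        for i in $(seq 1 12); do
            if ntpdate -s time.nist.gov; then
                break        # clock set successfully
            fi
            sleep 5          # network or DNS not up yet; try again
        done
        exit 0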

    Read the article

  • PlanetQuest solution update

    - by TATWORTH
    The PlanetQuest solution has been updated, post VS2010 SP1 installation, to have a simple WinForm that displays the latest extra-solar planet reported at http://planetquest.jpl.nasa.gov/ The WinForm will display the latest planet reported on the NASA site. The application can be minimised. When it picks up a new planet, it returns to the normal window state and displays the details of the newly discovered planet. The source is available at http://planetquest.codeplex.com/

    Read the article

  • Is the line of code the best unit of measure for a software project? A diagram ranks programs by the size of their source code

    Is the line of code the best unit of measure for a software project? A diagram ranks programs by the size of their source code. Information designer David McCandless has published a diagram ranking software and websites by the size of their source code, from the smallest number of lines of code (a simple iPhone game) to the largest (the healthcare.gov website). To produce this work, he drew on various sources, among them the...

    Read the article

  • What Are Authority Link Tools?

    In order to establish what an authority link tool is, you first have to define what an authority link is. An authority link is a backlink from a website that has standing in the search engines (these are usually .edu or .gov domains).

    Read the article

  • AGLS Metadata - Is it widely adopted?

    - by Brandrally
    Recently, I have seen AGLS metadata tags on a couple of sites around Australia:

        <meta name="AGLS.Audience" scheme="agls-audience" content="All"/>
        <meta name="DC.Publisher" scheme="AglsAgent" content="Hyundai"/>

    I have never seen this kind of mark-up before, and discovered http://www.agls.gov.au/. Just wondering whether there is a big community / support out there for adopting these tags? Any thoughts would be great.

    Read the article
