Search Results

Search found 5997 results on 240 pages for 'michael “doc” norton'.

Page 57/240

  • Word 2010 does not save as Word 2003 XML

    - by Peter
    I have a document which was created in Word 2010, but for use in a particular application, it needs to be saved in Word 2003 XML format. When I try the normal "Save as" via the File menu (choosing Word 2003 XML format to save as), Word 2010 thinks for a while, and then presents the "Save as" dialog to me again, suggesting that I save the document as .docx. Trying to get around this, I saved the document as .doc (i.e. Word 97-2003 document). This worked fine. But when I try to save this .doc file as Word 2003 XML, again Word 2010 thinks for a while, and then presents the "Save as" dialog, suggesting this time that I save the document as .doc. Oh, and I need to say that this only happens on a specific document - all others work fine. I know I should try a process of elimination and see what is causing the symptoms, but it would be nice to have an answer "in principle". Is there perhaps a setting somewhere that I have to enable? Does anyone know what's going on here?

    Read the article

  • Lighttpd + Django on Gentoo: 10 seconds to answer

    - by plaetzchen
    I want to run a Django site on lighttpd with FastCGI on a Gentoo machine. Every time I try to access the site I get a response after more or less exactly 10 seconds. I'm using a socket to let lighttpd communicate with my Django site, but a TCP port doesn't help either. Could this be a lighttpd problem? I tried both from a server on the internet and from localhost; this is what lighttpd gives me in the error.log:

        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.349) -- sanatising URI
        2012-07-10 14:36:36: (response.c.350) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (mod_access.c.135) -- mod_access_uri_handler called
        2012-07-10 14:36:36: (mod_fastcgi.c.3632) handling it in mod_fastcgi
        2012-07-10 14:36:36: (response.c.470) -- before doc_root
        2012-07-10 14:36:36: (response.c.471) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.472) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.473) Path :
        2012-07-10 14:36:36: (response.c.521) -- after doc_root
        2012-07-10 14:36:36: (response.c.522) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.523) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.524) Path : /var/www/owntube/owntube.fcgi
        2012-07-10 14:36:36: (response.c.541) -- logical -> physical
        2012-07-10 14:36:36: (response.c.542) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.543) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.544) Path : /var/www/owntube/owntube.fcgi

    Read the article

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine and PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd; it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf:

        NameVirtualHost 12.12.12.12.
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
            <Directory "/var/www2/cgi-bin/">
                AllowOverride Options
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                AddHandler cgi-script cgi pl
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Read the article

  • Postfix: outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail over TLS (STARTTLS, in fact), but only to a specific destination. I tried "smtp_tls_policy_maps". This is the only line in my main.cf regarding TLS configuration, but it does not seem to work. Here is my main.cf file:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my "tls_policy" file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried:

        gmail.com encrypt

    My wish is to use TLS only for the Gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS if possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because then I can see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to try to use TLS for the other domains... only for Gmail... Am I missing something in my conf? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance.

    Read the article

  • Email Mail Merge via linked Excel sheet

    - by Joe Perrin
    I have an MS Word 2007 document set up as a Mail Merge doc, using Excel as the data source. The MERGEFIELD ClientData contains an Excel file name (test.xlsx). I want to merge the data from the Excel file listed in ClientData into the respective Mail Merge document. However, whenever I start the Mail Merge, the {MERGEFIELD ClientData} field gets resolved only once and does not pick up the next row from ClientData. So this:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\{MERGEFIELD ClientData}" \a \f 4 \h}

    becomes this after starting the merge:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test.xlsx" \a \f 4 \h}

    So every Mail Merge doc uses test.xlsx instead of the Excel document specific to the client (i.e. test1.xlsx, test2.xlsx, test3.xlsx, etc.). As the merge runs through each Mail Merge doc I expect to see this:

        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test1.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test2.xlsx" \a \f 4 \h}
        {LINK Excel.Sheet.12 "C:\\path\\to\\file\\test3.xlsx" \a \f 4 \h}

    But for some reason this isn't happening. Does anyone have any suggestions? Thanks!

    Read the article

  • How can I password-protect the site and let cgi-bin work?

    - by jaaaaaaax
    This is taken from the sites-available directory; it's a virtual host setting for Apache. Accessing myiphere/cgi-bin/ throws a 403. The directory listing for /var/www2/ shows: drwxrwxrwx 8 www-data www-data

        NameVirtualHost myiphere
        <VirtualHost myiphere>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

    Read the article

  • Virtual host doesn't read .htaccess

    - by Charlie
    I just created a virtual host:

        <VirtualHost myvirtualhost:80>
            ServerAdmin webmaster@myvirtualhost
            ServerName myvirtualhost
            DocumentRoot /home/myname/sites/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/myname/sites/public_html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

    It works, but it can't read the .htaccess file in public_html:

        DirectoryIndex otherindex.php

    I tried changing all AllowOverride to All, but I get a 500 error. How can I fix this? Thanks.

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\server\share' that has a documents folder which I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my application's link to the file; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by adding a Context to the server.xml file. So I put:

        <Host name="localhost" appBase="webapps"
              unpackWARs="true" autoDeploy="true">

          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
          -->

          <!-- Access log processes all example.
               Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />

          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    The Context at the bottom is what was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au

    Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept.

    A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”.

    This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching; it’s more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer.

    It’s interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this.

    The more observant amongst you will have noticed some differences between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more. He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view).

    I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are.

    Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr, that lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps.

    I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.

    Read the article

  • Planning an Event – SPS NYC

    - by MOSSLover
    I bet some of you were wondering why I am not going to any events for the most part in June and July (aside from volunteering at SPS Chicago). Well, I basically have no life for the next 2 months. We are approaching the 11th hour of SharePoint Saturday New York City. The event is slated to have 350 to 400 attendees. This is the second year in a row I’ve helped run it with Jason Gallicchio. It’s amazingly crazy how much effort this event requires versus Kansas City. It’s literally 2x the volume of attendees, speakers, and sponsors, plus don’t even get me started about volunteers. So here is a bit of the breakdown…

    We have 30+ volunteers that Tasha Scott from the Hampton Roads area will be managing the day of the event to do things like timing the speakers, handing out food, making sure people who did not sign up don’t walk into the event until we get a count for fire code, registering people, watching the sharpies, watching the prizes, making sure attendees get to the right place, opening and closing the partition in the big room, moving chairs, moving furniture, etc. Then there are Jason, Greg, and I, who will be making sure that the speakers, sponsors, and everything else are going smoothly in the background. We need to make sure that everything is set up properly and in the right spot. We also need to make sure signs are printed, schedules are created, and bags are stuffed with sponsor material. Plus we need to send out emails to sponsors reminding them to send us the right information to post on the site for charity sessions, send us boxes with material to stuff bags, and we need to make sure that Michael Lotter gets their information for invoicing. We also need to check that Michael has invoiced everyone and who has paid, because we can’t order anything for the event without money. Once people have paid we need to set up food orders, speaker and volunteer dinners, buy prizes, buy bags, buy speaker/volunteer/organizer shirts, etc. During this process we need all the abstracts from speakers, all the bios, pictures, shirt sizes, and other items so we can create schedules and order items. We also need to keep track of who is attending the dinner the night before for volunteers and speakers and make sure we don’t hit capacity. Then there is attendee tracking and making sure that we don’t hit too many attendees. We need to make sure that attendees know where to go and what to do. We have to make all kinds of random supply lists during this process and keep on track with a variety of lists and emails plus conference calls. All in all it’s a lot of work, and I am trying to keep track of it all so that we don’t duplicate anything or miss anything. So basically, all in all, if you don’t see me around for another month, don’t worry, I do exist.

    Right now, if you look at what I’m doing, I am traveling every Monday morning and Thursday night back and forth to Washington DC from New Jersey. Every night I am working on organizational stuff for SharePoint Saturday New York City. Every Tuesday night we are running an event conference call. Every weekend I am either with family or my boyfriend and cat, trying hard not to touch the event. So all my time is pretty much work, event, and family/boyfriend. I have 0 bandwidth for anything in the community. If you compound that with my severe allergy problems in DC and a doctor’s appointment every month plus a new med once a week, I’m lucky I am still standing and walking.

    So basically, once July 30th hits, hopefully Jason Gallicchio, Greg Hurlman, and I will be able to breathe a little easier. If I forget to do this: thank you Greg and Jason. Thank you Tom Daly. Thank you Michael Lotter. Thank you Tasha Scott. Thank you Kevin Griffin. Thank you to all the volunteers, speakers, sponsors, and attendees who will make and have made this event a success. Hopefully, we have enough time until next year to regroup, recharge, and make the event grow bigger in a different venue. Awesome job, everyone: we sold out within 3 days of registration and we still have several weeks to go. Right now the waitlist is at 49 people with room to grow. If you attend the event, thank all these guys I mentioned above for making it possible. It’s going to be awesome, I know it, but I probably won’t remember half of it due to the blur of things that we will all be taking care of on the day of the event. Catch you all at the end of July/early August, when I will attempt to post something useful and clever, possibly while wearing a fez.

    Technorati Tags: SPS NYC, SharePoint Saturday, SharePoint Saturday New York City

    Read the article

  • How to get attribute value using SelectSingleNode?

    - by Nano HE
    I am parsing an XML document and need to find the gid attribute value (3810) using SelectSingleNode(). I found it is not easy to get at the attribute name and its value. Can I use this method, or must I switch to another approach? My code is attached. How can I use the book object to get the attribute value 3810 for gid? Thank you. My test.xml file is as below:

        <?xml version="1.0" ?>
        <root>
          <VersionInfo date="2007-11-28" version="1.0.0.2" />
          <Attributes>
            <AttrDir name="EFEM" DirID="1">
              <AttrDir name="Aligner" DirID="2">
                <AttrDir name="SequenceID" DirID="3">
                  <AttrObj text="Slot01" gid="3810" unit="" scale="1" />
                  <AttrObjCount value="1" />
                </AttrDir>
              </AttrDir>
            </AttrDir>
          </Attributes>
        </root>

    I wrote test.cs as below:

        public class Sample
        {
            public static void Main()
            {
                XmlDocument doc = new XmlDocument();
                doc.Load("test.xml");
                XmlNode book;
                XmlNode root = doc.DocumentElement;
                book = root.SelectSingleNode("Attributes[AttrDir[@name='EFEM']/AttrDir[@name='Aligner']/AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']]");
                Console.WriteLine("Display the modified XML document....");
                doc.Save(Console.Out);
            }
        }

    [Update 06/10/2010] The XML file is complex and includes thousands of gids, but for each XPath the gid is unique. I load the XML file into a TreeView control with this.treeView1.AfterSelect += new System.Windows.Forms.TreeViewEventHandler(this.treeView1_AfterSelect);. When the treeView1_AfterSelect event occurs, e.Node.FullPath returns a string value, which I parse to build the XPath members above. Then I try to find which gid item was selected; I need to return that gid value.
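
    A minimal sketch of one possible answer, assuming the same test.xml shown above: point the XPath at the AttrObj element itself rather than at one of its ancestors, then read gid from the node's Attributes collection.

        // Sketch: select the AttrObj node directly and read its gid attribute.
        XmlNode attrObj = root.SelectSingleNode(
            "Attributes/AttrDir[@name='EFEM']/AttrDir[@name='Aligner']" +
            "/AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']");
        if (attrObj != null)
        {
            string gid = attrObj.Attributes["gid"].Value;   // "3810" for the sample file
            Console.WriteLine(gid);
        }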

    Read the article

  • fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_43.lib'

    - by Poni
    I made a new project, added main.cpp, and wrote the code at this URL: http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/example/echo/async_tcp_echo_server.cpp I also added the appropriate include path. What's next?! It seems like a darn mystery to build Boost code! I've been digging at it for more than 10 hours. Can anyone give a straightforward answer on how to build the Boost libraries from source under Windows, VC9? God bless you all.

    Read the article

  • Java: JPQL date function to add a time period to another date

    - by bguiz
    SELECT x FROM SomeClass WHERE x.dateAtt BETWEEN CURRENT_DATE AND (CURRENT_DATE + 1 MONTH)

    In the above JPQL statement, SomeClass has a member dateAttr, which is a java.util.Date and has a @Temporal(javax.persistence.TemporalType.DATE) annotation. I need a way to do the (CURRENT_DATE + 1 MONTH) bit - it is obviously wrong in its current state - but I cannot find the documentation for JPQL date functions. Can anyone point me in the direction of a doc that documents JPQL date functions (and also how to do this particular query)?

    Read the article

  • Delphi - How do I split a string into an array of strings based on a delimiter?

    - by Ryan
    Hello all, I'm trying to find a Delphi function that will split an input string into an array of strings based on a delimiter. I've found a lot on Google, but all seem to have their own issues and I haven't been able to get any of them to work. I just need to split a string like: "word:doc,txt,docx" into an array based on ':'. The result would be ['word', 'doc,txt,docx']. Does anyone have a function that they know works? Thank you

    Read the article

  • Amazon MWS orders - help simplify, flatten XML

    - by Scott Kramer
        XDocument doc = XDocument.Load(filename);
        var ele = doc.Elements("AmazonEnvelope")
            //.Elements("Header")
            .Elements("Message")
            .Elements("OrderReport")
            .Elements("Item")
            .Select(element => new
            {
                AmazonOrderItemCode = (string)element.Element("AmazonOrderItemCode"),
                SKU = (string)element.Element("SKU"),
                Title = (string)element.Element("Title"),
                Quantity = (string)element.Element("Quantity"),
            })
            //.Elements("ItemPrice")
            //.Elements("Component")
            //.Select(element => new
            //{
            //    Type = (string)element.Element("Type"),
            //    Amount = (string)element.Element("Amount"),
            //})
            .ToList();

        foreach (var x in ele)
        {
            Console.WriteLine(x.ToString());
        }
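
    If the goal is also to pull in the commented-out ItemPrice components, one possible approach (a sketch, assuming each Item nests Component elements under ItemPrice as the commented lines suggest) is to flatten them with SelectMany rather than chaining .Elements() after the .Select():

        // Sketch: requires using System.Linq; and using System.Xml.Linq;
        var components = doc.Descendants("Item")
            .SelectMany(item => item.Elements("ItemPrice").Elements("Component"),
                (item, comp) => new
                {
                    SKU = (string)item.Element("SKU"),
                    Type = (string)comp.Element("Type"),
                    Amount = (string)comp.Element("Amount"),
                })
            .ToList();

        foreach (var c in components)
        {
            Console.WriteLine(c.ToString());
        }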

    Read the article

  • SSMS 2008 Add-In - Execute Query

    - by ca8msm
    I'm loading a SQL script into an SSMS 2008 add-in like so:

        ' create a new blank document
        ServiceCache.ScriptFactory.CreateNewBlankScript(Microsoft.SqlServer.Management.UI.VSIntegration.Editors.ScriptType.Sql)

        ' insert SQL statement to the blank document
        Dim doc As EnvDTE.TextDocument = CType(ServiceCache.ExtensibilityModel.Application.ActiveDocument.Object(Nothing), EnvDTE.TextDocument)
        doc.EndPoint.CreateEditPoint().Insert(_Output.ToString())

    Is there a way to automatically execute the statement as well? Thanks, Mark
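
    One approach worth trying (a sketch only, not verified against every SSMS build): once the text has been inserted, ask the DTE automation object to run the standard Query.Execute command, the same command SSMS binds to F5. Shown here in C# for brevity, and it assumes ServiceCache.ExtensibilityModel exposes the EnvDTE DTE instance used in the snippet above.

        // Hypothetical sketch: execute the active query window via the DTE automation model.
        var dte = (EnvDTE._DTE)ServiceCache.ExtensibilityModel;
        dte.ExecuteCommand("Query.Execute", "");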

    Read the article

  • Details on the following Natural Language Processing terms?

    - by wefwgeweg
    Named Entity Extraction (extract people, cities, organizations)
    Content Tagging (extract topic tags by scanning the doc)
    Structured Data Extraction
    Topic Categorization (taxonomy classification by scanning the doc... Bayesian)
    Text extraction (HTML page cleaning)

    Are there libraries that I can use to do any of the above NLP functions? I don't really feel like forking out cash to AlchemyAPI.

    Read the article

  • Dojo Table not Rendering in IE6

    - by Mike Carey
    I'm trying to use Dojo (1.3) checkBoxes to make columns appear/hide in a Dojo Grid that's displayed below the checkBoxes. I got that functionality to work fine, but I wanted to organize my checkBoxes a little better, so I tried putting them in a table. My dojo.addOnLoad function looks like this:

        dojo.addOnLoad(function(){
            var checkBoxes = [];
            var container = dojo.byId('checkBoxContainer');
            var table = dojo.doc.createElement("table");
            var row1 = dojo.doc.createElement("tr");
            var row2 = dojo.doc.createElement("tr");
            var row3 = dojo.doc.createElement("tr");
            dojo.forEach(grid.layout.cells, function(cell, index){
                //Add a new "td" element to one of the three rows
            });
            dojo.place(addRow, table);
            dojo.place(removeRow, table);
            dojo.place(findReplaceRow, table);
            dojo.place(table, container);
        });

    What's frustrating is:
    1) Using the Dojo debugger I can see that the HTML is being properly generated for the table.
    2) I can take that HTML and put just the table in an empty HTML file and it renders the checkBoxes in the table just fine.
    3) The page renders correctly in Firefox, just not IE6.

    The HTML that is being generated looks like so:

        <div id="checkBoxContainer">
          <table>
            <tr>
              <td>
                <div class="dijitReset dijitInline dijitCheckBox" role="presentation"
                     widgetid="dijit_form_CheckBox_0" wairole="presentation">
                  <input class="dijitReset dijitCheckBoxInput" id="dijit_form_CheckBox_0"
                         tabindex="0" type="checkbox" name=""
                         dojoattachevent="onmouseover:_onMouse,onmouseout:_onMouse,onclick:_onClick"
                         dojoattachpoint="focusNode" unselectable="on" aria-pressed="false"/>
                </div>
                <label for="dijit_form_CheckBox_0">
                  Column 1
                </label>
              </td>
              <td>
                <div class="dijitReset dijitInline dijitCheckBox" role="presentation"
                     widgetid="dijit_form_CheckBox_1" wairole="presentation">
                  <input class="dijitReset dijitCheckBoxInput" id="dijit_form_CheckBox_1"
                         tabindex="0" type="checkbox" name=""
                         dojoattachevent="onmouseover:_onMouse,onmouseout:_onMouse,onclick:_onClick"
                         dojoattachpoint="focusNode" unselectable="on" aria-pressed="false"/>
                </div>
              </td>
            </tr>
            <tr>
              ...
            </tr>
          </table>
        </div>

    I would have posted to the official Dojo forums, but it says they're deprecated and they're using a mailing list now. They said if a mailing list doesn't work for you, use stackoverflow.com. So, here I am! Thanks for any insight you can provide.

    Read the article

  • How do I convert Word files to PDF programmatically?

    - by Shaul
    I have found several open-source/freeware programs that allow you to convert .doc files to .pdf files, but they're all of the application/printer driver variety, with no SDK attached. I have found several programs that do have an SDK allowing you to convert .doc files to .pdf files, but they're all of the proprietary type, $2,000 a license or thereabouts. Does anyone know of any clean, inexpensive (preferably free) programmatic solution to my problem, using C# or VB.NET? Thanks!
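
    One inexpensive route, assuming Office 2007 SP2 or later (which can export PDF natively) is installed on the machine doing the conversion, is to drive Word itself through COM interop. A minimal C# sketch follows; note that automating Word from a service or web server is officially unsupported, so treat this as a desktop-side workaround rather than a definitive answer.

        // Sketch: requires a reference to Microsoft.Office.Interop.Word (Word 2007 SP2+ for PDF export).
        using Word = Microsoft.Office.Interop.Word;

        static void ConvertDocToPdf(string docPath, string pdfPath)
        {
            var word = new Word.Application();
            try
            {
                // Open the source .doc read-only, export it as PDF, then close it without saving.
                Word.Document doc = word.Documents.Open(docPath, ReadOnly: true);
                doc.ExportAsFixedFormat(pdfPath, Word.WdExportFormat.wdExportFormatPDF);
                doc.Close(Word.WdSaveOptions.wdDoNotSaveChanges);
            }
            finally
            {
                word.Quit();
            }
        }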

    Read the article

  • Firefox logs invalid URL?

    - by thanks for help
    I'm writing an extension for Firefox. Using dom.location to keep track of visited search results pages, I'm getting this URL: http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e . If you click it, the Google search results for "hi" should come up. You'll know that from the title bar, because the rest of the page won't load. This happens with any Google search. Oddly enough, if you cut part of it off, say http://www.google.com/search?hl=en&source=hp&q=hi - it works! But Googling "hi" myself does give me a longish URL - http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=db658cc5049dc510 . I know for a fact that the first time that URL was visited, the page loaded; I did it myself. Can anyone make sense of this?

    I just tried my experiment again, this time saving the original URL in the location bar. It turns out dom.location.href is giving a different value. How is this happening?

    Original: http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e
    dom.location.href: http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e

        window.addEventListener("load", function() { myExtension.init(); }, false);

        var myExtension = {
            init: function() {
                var appcontent = document.getElementById("appcontent");   // browser
                if (appcontent)
                    appcontent.addEventListener("DOMContentLoaded", myExtension.onPageLoad, true);
                var messagepane = document.getElementById("messagepane"); // mail
                if (messagepane)
                    messagepane.addEventListener("load", function () { myExtension.onPageLoad(); }, true);
            },

            onPageLoad: function(aEvent) {
                var doc = aEvent.originalTarget; // doc is document that triggered "onload" event
                // do something with the loaded page.
                // doc.location is a Location object (see below for a link).
                // You can use it to make your code executed on certain pages only.
                var url = doc.location.href;
                if (url.match(/(?:p|q)(?:=)([^%]*)/)) {
                    alert("MATCH" + url);
                    resultsPages.push(url);
                } else {
                    alert(url);
                }
            }
        };

    This snippet comes directly from Mozilla, with the matching and alerts my own. I apologize for not posting the code earlier.

    Read the article

  • Nokogiri and Special Characters

    - by Moe
    I'm using Nokogiri to grab the contents of the title tag on a webpage, but am having trouble with accented characters. What's the best way to deal with these? Here's what I'm doing:

        require 'open-uri'
        require 'nokogiri'

        doc = Nokogiri::HTML(open(link))
        title = doc.at_css("title")

    At this point, the title looks like this:

        Rag\303\271

    Instead of:

        Ragù

    How can I have Nokogiri return the proper character (e.g. ù in this case)?

    Read the article

  • Good/full Boost Spirit examples using version 2 syntax

    - by bpw1621
    Almost all of the examples I've gone and looked at so far from: http://boost-spirit.com/repository/applications/show_contents.php use the old syntax. I've read and re-read the actual documentation at http://www.boost.org/doc/libs/1_42_0/libs/spirit/doc/html/index.html and the examples therein. I know Joel is starting a compiler series on the blog http://boost-spirit.com/home/ but that hasn't gotten in full swing yet. Any other resources to see worked examples using some more sophisticated/involved aspects in the context of fully working applications?

    Read the article

  • WPF WebBrowser: How to set element click event?

    - by Ralph
    I've figured out how to make everything red as soon as the page is finished loading:

        private void webBrowser1_LoadCompleted(object sender, NavigationEventArgs e)
        {
            var doc = (IHTMLDocument2)webBrowser1.Document;
            foreach (IHTMLElement elem in doc.all)
            {
                elem.style.backgroundColor = "#ff0000";
            }
        }

    Now what if I want to make the element only change color when it's clicked? I see that elem has an onclick property, but its type is dynamic so I don't know what to do with it. The documentation is pretty useless.
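
    One possible approach (a sketch, assuming the Microsoft.mshtml interop assembly is referenced and the page has finished loading) is to skip the per-element onclick property and instead hook the document-level click event exposed through the HTMLDocumentEvents2_Event interface, then color whichever element raised it:

        // Sketch: wire a click handler at the document level via mshtml interop.
        private void webBrowser1_LoadCompleted(object sender, NavigationEventArgs e)
        {
            var doc = (HTMLDocument)webBrowser1.Document;
            ((HTMLDocumentEvents2_Event)doc).onclick += evtObj =>
            {
                IHTMLElement elem = evtObj.srcElement;   // the element that was clicked
                elem.style.backgroundColor = "#ff0000";
                return true;                             // allow the click to proceed normally
            };
        }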

    Read the article

  • .NET OCRing an Image

    - by Kirschstein
    I'm trying to use MODI to OCR a program's window. It works fine for screenshots I grab programmatically using Win32 interop like this:

        public string SaveScreenShotToFile()
        {
            RECT rc;
            GetWindowRect(_hWnd, out rc);
            int width = rc.right - rc.left;
            int height = rc.bottom - rc.top;
            Bitmap bmp = new Bitmap(width, height);
            Graphics gfxBmp = Graphics.FromImage(bmp);
            IntPtr hdcBitmap = gfxBmp.GetHdc();
            PrintWindow(_hWnd, hdcBitmap, 0);
            gfxBmp.ReleaseHdc(hdcBitmap);
            gfxBmp.Dispose();
            string fileName = @"c:\temp\screenshots\" + Guid.NewGuid().ToString() + ".bmp";
            bmp.Save(fileName);
            return fileName;
        }

    This image is then saved to a file and run through MODI like this:

        private string GetTextFromImage(string fileName)
        {
            MODI.Document doc = new MODI.DocumentClass();
            doc.Create(fileName);
            doc.OCR(MODI.MiLANGUAGES.miLANG_ENGLISH, true, true);
            MODI.Image img = (MODI.Image)doc.Images[0];
            MODI.Layout layout = img.Layout;
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < layout.Words.Count; i++)
            {
                MODI.Word word = (MODI.Word)layout.Words[i];
                sb.Append(word.Text);
                sb.Append(" ");
            }
            if (sb.Length > 1)
                sb.Length--;
            return sb.ToString();
        }

    This part works fine; however, I don't want to OCR the entire screenshot, just portions of it. I tried cropping the image programmatically like this:

        private string SaveToCroppedImage(Bitmap original)
        {
            Bitmap result = original.Clone(new Rectangle(0, 0, 250, 250), original.PixelFormat);
            var fileName = "c:\\" + Guid.NewGuid().ToString() + ".bmp";
            result.Save(fileName, original.RawFormat);
            return fileName;
        }

    and then OCRing this smaller image, but MODI throws an exception: 'OCR running error', with error code -959967087. Why can MODI handle the original bitmap but not the smaller version taken from it?
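
    A commonly suggested workaround, offered here as an assumption to test rather than a confirmed fix: MODI can be picky about an image's size and resolution metadata, and Bitmap.Clone may not carry the source DPI onto the new bitmap. Copying the resolution across before saving the cropped image is a cheap thing to try:

        private string SaveToCroppedImage(Bitmap original)
        {
            Bitmap result = original.Clone(new Rectangle(0, 0, 250, 250), original.PixelFormat);

            // Carry the source DPI across; a cloned bitmap may fall back to a default
            // resolution, which MODI has been reported to reject on small images.
            result.SetResolution(original.HorizontalResolution, original.VerticalResolution);

            var fileName = @"c:\" + Guid.NewGuid().ToString() + ".bmp";
            result.Save(fileName, original.RawFormat);
            return fileName;
        }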

    Read the article
