Search Results

Search found 36925 results on 1477 pages for 'large xml document'.


  • Issues while downloading document from Sharepoint using JAVA

    - by Deepak Singh Rawat
    I am trying to download a file from a SharePoint 2007 SP2 document library using the GetItem method of the Copy web service, and I am facing the following issues. On my local instance (Windows Vista), the web service returns only 10.5 KB of data for any file, so I can save only the first 10.5 KB. On the production server, I can list the documents using a given set of credentials, but when I try to download a document using those same credentials I get a 401 Unauthorized response - even though I can download the document through the SharePoint website successfully.

    Read the article

  • How to update document object value in loop (Javascript)

    - by Graham
    Hi folks, I am a beginner at JavaScript, so apologies if this is a dumb question. I am trying to put the following in a for loop but I can't seem to get it to work:

        if (rowID==1) { document.form1.class_titleDis_1.value=class_title; }
        if (rowID==2) { document.form1.class_titleDis_2.value=class_title; }

    This is what I tried:

        var i=1;
        for (i=1;i<=2;i++) {
            document.form1.(class_titleDis_)+i.value=class_title;
        }

    I would be really grateful for any suggestions. Graham
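
    A note on the usual fix (a sketch, assuming the fields really are named class_titleDis_1, class_titleDis_2, and so on): property names built at runtime have to be looked up with bracket notation, because dot notation such as document.form1.(class_titleDis_)+i is a syntax error:

        // Build each field name as a string and index the form with brackets.
        for (var i = 1; i <= 2; i++) {
            document.form1["class_titleDis_" + i].value = class_title;
        }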

    Read the article

  • Saving a "project"-type document (containing sub-documents)

    - by andyvn22
    I'm trying to create a "project"-like document, in that it contains subdocuments in a specified directory. I'd like a brand new save of a document to set up that directory with appropriate subdirectories. I'd like a "Save As" to copy all those subdirectories and any files within them to the new location. But I'd like a "Save" to only update certain data files and (of course) not overwrite all the subdocuments! What's the "safe" way to do this? I tried keeping track of the file's location in my document, and checking to see if it was the same or different than the save location, but it feels messy, and I'm worried that Apple is doing something behind the scenes that will make this direct URL-to-URL comparison fail in some circumstances. Is there a standard way to do something like this?

    Read the article

  • Lowering document.domain

    - by Sergej Andrejev
    What am I doing wrong? According to the specs, lowering the domain with JavaScript should be possible in IE8 and IE7, but my code only works in Firefox and throws an Argument Exception in IE:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <body onload="alert(document.domain); try { document.domain = 'if.se' } catch(e) { alert(e); }; alert(document.domain);">
        </body>
        </html>
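
    A note on one common cause (an assumption, since the page's actual host isn't shown): document.domain may only be lowered to a suffix of the current host name, and IE reports a violation as an Argument exception - for instance when the page is opened from localhost or any host that isn't under if.se. A defensive sketch:

        // Only lower document.domain when the current host really is if.se
        // or one of its subdomains; anything else throws in most browsers.
        var target = 'if.se';
        var host = document.domain;
        if (host === target || host.slice(-(target.length + 1)) === '.' + target) {
            document.domain = target;
        }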

    Read the article

  • .toggle(true) throw null in $(document).ready(function())

    - by James123
    I am toggling row siblings. I call .toggle(true) when the document is ready, as shown below, but I think the row siblings are not available before this function is called:

        $(document).ready(function() {
            $('tr[@class^=RegText]').hide().children('td');
            list_Visible_Ids = [];
            var idsString, idsArray;
            idsString = $('#myVisibleRows').val();
            idsArray = idsString.split(',');
            $.each(idsArray, function() {
                if (this != "") {
                    $(this).siblings('.RegText').toggle(true);
                    list_Visible_Ids[this] = 1;
                }
            });
        });

    How do I resolve this? Why are the siblings not available when the document is ready?
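
    A note on one likely culprit (an assumption based on the snippet alone): inside the $.each callback, this is the raw ID string from the comma-separated list, so $(this) acts as a tag-name selector rather than a reference to the row. A sketch of the usual correction:

        // Prefix each ID with '#' so jQuery treats it as an ID selector; note
        // also that the [@attr] selector syntax was removed in jQuery 1.3.
        $.each(idsArray, function(index, id) {
            if (id !== "") {
                $('#' + id).siblings('.RegText').toggle(true);
                list_Visible_Ids[id] = 1;
            }
        });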

    Read the article

  • How to run Javascript code before document is completely loaded (using jQuery)

    - by eliza sahoo
    Hi all, I am sharing a tip with you; please add on to this discussion. jQuery can make a page feel faster to load than plain JavaScript, because jQuery's ready handlers fire as soon as the DOM is ready, instead of waiting for the complete page load. It is a common practice to call a JavaScript function when the page is loaded, like this:

        window.onload = function(){
            alert("Mindfire");
        }

    inside of which is the code that we want to run right when the page is loaded. Problematically, however, that JavaScript code isn't run until all images are finished downloading (this includes banner ads). The reason for using window.onload in the first place is due to the fact that the HTML document isn't finished loading yet when you first try to run your code. To circumvent both problems, jQuery has a simple statement that checks the document and waits until it's ready to be manipulated, known as the ready event:

        $(document).ready(function() {
            // Your code here
        });

    Read the article

  • URL encoding for document.location.href

    - by Tyler
    Hello - I am building an iframe and am using document.location.href. My exact code is:

        <script type="text/javascript">
        document.write("<iframe src='http://www.facebook.com/plugins/like.php?href="
            + document.location.href
            + "&layout=standard&show_faces=false&action=like&font=verdana&colorscheme=light' frameborder=0></iframe>");
        </script>

    This is working great for all of my pages except one, and I believe the issue with that one page is caused by a dash "-" in the page name. My question is: is there a way to encode my src differently so that the link works? The CORRECT URL I want it to pull is []/products/Product%252dExample.html, but what it IS pulling in is []/products/Product-Example.html, and this is causing the page to not work correctly. Thanks!
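
    A note on the usual approach (a sketch; it assumes the address bar still contains the escaped %2d rather than a dash the browser has already normalized away): a URL embedded in another URL's query string should go through encodeURIComponent, which encodes the % signs so an escaped dash survives the plugin's own decoding:

        // '%2d' in the path becomes '%252d', which decodes back to '%2d' after
        // the Like plugin unescapes the href parameter once.
        var pageUrl = encodeURIComponent(document.location.href);
        document.write("<iframe src='http://www.facebook.com/plugins/like.php?href="
            + pageUrl
            + "&layout=standard&show_faces=false&action=like&font=verdana&colorscheme=light'"
            + " frameborder=0></iframe>");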

    Read the article

  • need to count the frequency of each term inside a document

    - by Wai Loon II
    Hi, I need to calculate the frequency of all the terms inside a document. How can I do that? I am not asking for code, just for guidance. I am doing a similarity calculation between a document and a query: I have calculated the term frequency for the query, but I do not know how to calculate the term frequency for EACH word inside a document. Can anyone guide me? Thank you for your attention.
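
    For readers who do want a concrete starting point, a minimal sketch (the tokenization rule here is an assumption; real systems usually add stemming and stop-word removal on top):

        // Count how often each lower-cased word occurs in a piece of text.
        function termFrequencies(text) {
            var counts = {};
            var words = text.toLowerCase().match(/[a-z0-9]+/g) || [];
            for (var i = 0; i < words.length; i++) {
                counts[words[i]] = (counts[words[i]] || 0) + 1;
            }
            return counts;
        }

    Dividing each count by the total number of words gives the relative frequency normally plugged into TF-IDF-style similarity measures.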

    Read the article

  • Handover document for complete systems

    - by viraptor
    Hi, I need to create a handover document for a fairly large system consisting of all the stuff you'd expect from a telecom deployment: many servers, database clusters which copy some data between them in specific ways, tons of log files, both off-the-shelf and locally developed software, scripts, network configurations, local know-how, etc. It really has as many sysadmin-typical elements as development ones. The target audience of this document is, in the first place, sysadmins who take over the day-to-day operation tasks and some problem resolving, and in the second place, people who want to learn about the system in general. Is there some place I can learn how to write something like that? It could just as easily be a 10-page "what's where" as a 500-page book about "all things telephony"; maybe it should really be more than one document. Please link some useful resources/books I could use for this task. PS: this is intended to be internal only; customer interactions etc. are out of scope here.

    Read the article

  • Show file extensions in document libraries in Sharepoint 2010

    - by sebastian
    I've got a SharePoint site collection with several subsites, each having its own document library. Now I want to add the file extensions to the document names in all those libraries. How should I do this? I've seen tips telling you to modify the onet.xml file, but the examples never look like mine does, and furthermore I don't know for sure what happens to that onet.xml file - does changing it affect the existing libraries? Other tips tell me to use SharePoint Designer, but that would mean I'd have to do it for every view in every library, wouldn't it? So I'd prefer doing it from code, where I feel more comfortable and where I can automate the process. All I want is to replace the "Name (linked to document with edit menu)" column with "Name (for use in forms)" but still keep the link and the edit menu. I've found I need to use the FileLeafRef field, but I don't know how! Thankful for any new clues, Sebastian

    Read the article

  • document.forms.gallery_form.submit is not a function

    - by Keene Maverick
    I swear I have this exact thing working on another page. I'm such a JavaScript noob it's embarrassing...

        function delete_gallery() {
            var gallery = document.getElementById('gallery_id').value;
            var form = document.getElementById('gallery_form');
            form.setAttribute('action', 'index.php?action=delete&section=galleries&id=' + gallery);
            document.forms['gallery_form'].submit();
        }

    Inspecting the element shows that it's updating the action correctly:

        <form method="post" action="index.php?action=delete&amp;section=galleries&amp;id=12" name="gallery_form" id="gallery_form">
            <input type="hidden" value="12" id="gallery_id" name="gallery_id">
            <p>Name: <input type="text" name="name" value="Woo"></p>
            <p>Description:<br><textarea name="description">Dee</textarea><input type="hidden" value="2" name="artist"></p>
            <p><input type="submit" value="Submit" name="submit"></p>
        </form>

    Here's the button I use to call the function; it's in a table below the form:

        <button onclick="delete_gallery()" type="button">Delete Gallery</button>
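
    A note on the likely cause (visible in the markup above): the form contains an <input type="submit" name="submit">, and a control named "submit" shadows the form element's built-in submit() method, so document.forms['gallery_form'].submit is that input element rather than a function. The common fixes are renaming the button, or borrowing submit() from a pristine form:

        // form.submit is shadowed by <input name="submit">, so invoke the
        // unshadowed method of a freshly created form on this form instead.
        var form = document.getElementById('gallery_form');
        document.createElement('form').submit.call(form);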

    Read the article

  • jquery document.ready multiple declaration

    - by Hendry H.
    I realized that I can specify $(document).ready(function(){}); more than once, like this:

        $(document).ready(function(){
            var abc = "1122";
            //do something..
        });

        $(document).ready(function(){
            var abc = "def";
            //do something..
        });

    Is this standard? This code works in my Firefox (16.0.2); I'm just a little afraid that other browsers may not handle it. What actually happens? How does jQuery handle this code? Thanks.
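
    A note on what jQuery does here (its documented behavior, not something browser-dependent): each call simply queues the function, and all queued handlers run in registration order once the DOM is ready. A quick sketch:

        $(document).ready(function() { console.log("first"); });
        $(document).ready(function() { console.log("second"); });
        // Logs "first" then "second" once the DOM is ready; each handler keeps
        // its own scope, so the two `abc` variables above never collide.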

    Read the article

  • document.getElementById("tweetform").submit is not a function?

    - by Abs
    Hello all, I am using Firefox, and I've been reading some forums claiming this doesn't work in Firefox - but it has been working for me for the last few days and has now stopped working, and I cannot figure out why. I make an AJAX POST request using an iframe. When I get the response I use this:

        function startLoad(){
            $.get("update.php", function(data){
                if(data==''){
                    startLoad();
                }
                else{
                    document.getElementById("tweetform").submit();
                }
            });
        }

    However, from Firebug, I get this:

        document.getElementById("tweetform").submit is not a function
        [Break on this error] document.getElementById("tweetform").submit();

    I know submit exists, but what is going on? Thanks all
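
    A note on the usual suspects (assumptions, since the markup isn't shown): this error appears when "tweetform" resolves to something other than a <form>, or when the form contains a control named or id'd "submit" that shadows the method - the same trap as in the gallery_form question above. A quick Firebug check:

        var el = document.getElementById("tweetform");
        console.log(el && el.tagName);        // should be "FORM"
        console.log(el && typeof el.submit);  // "function" unless shadowed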

    Read the article

  • Javascript - document.links.className ?

    - by Slevin
    I have a JavaScript question that I am curious about. When using:

        var test = document.links(arrayNum).name;

    is it possible to dig deeper into specifying links? For instance, if there is a group of links, inside a div with an id, that you would like to target as opposed to all the links on the page, could you write something like:

        var test = document.getElementById('divId').links(arrayNum).name;

    Or could I add a class to the statement to target only the links associated with it?

        var test = document.links.className(arrayNum).name;

    Is anything like this feasible?
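
    A note on what the DOM actually provides (the two variations above are not real APIs, and 'someClass' below is a placeholder): the links collection exists only on document, and collections are indexed with brackets - document.links(arrayNum) with parentheses is an IE-only quirk. To scope to a div or a class, collect the anchors from that element and filter them:

        // Links inside one div:
        var divLinks = document.getElementById('divId').getElementsByTagName('a');
        var test = divLinks[arrayNum].name;

        // Links carrying a given class, filtered by hand for older browsers:
        var matches = [];
        for (var i = 0; i < document.links.length; i++) {
            if (document.links[i].className === 'someClass') {
                matches.push(document.links[i]);
            }
        }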

    Read the article

  • In which document do file specifications belong?

    - by Andrew
    In which document would a file specification belong? Perhaps this file is used as an input to a third-party system. Would it belong in its own document? Or would it be better to put it in the functional or design spec? Or somewhere else? When I say file specification, I mean a description of what format the file is (CSV, fixed width, etc), columns, data types, etc. Also, where should you document how the file is generated? i.e. business rules/algorithms which are used to generate the file.

    Read the article

  • Google Document export via API

    - by micco
    After using Zend_GData to retrieve a document list feed, I can use the content URLs in the form http://docs.google.com/document/edit?id=<docid>&hl=en, but the source URLs in the form http://docs.google.com/feeds/download/documents/Export?docId=<docid>&exportFormat=html are returning 404 errors. That URL should return the content of the document in the requested format, but it is returning 404. This problem is mentioned without resolution on a Google API forum. As indicated in that forum post, this problem only seems to affect new documents: my code works perfectly retrieving old documents, but new ones return 404. Has something changed in the way Google references new documents or in the way permissions are assigned? The code I'm using is essentially the same as the code on this page, but this does not seem to be an issue specific to PHP/Zend_Gdata.

    Read the article

  • Difference between the Document classes

    - by takoi
    I've been reading the Javadocs trying to get a grasp of the Swing Document API, but I can't get anything sensible out of them because there are so many classes: Document, StyledDocument, AbstractDocument, DefaultStyledDocument, PlainDocument, HTMLDocument, and someone mentioned DocumentFilter. This question is meant on a general basis: can someone give an overview of the differences between the implementations, and what the different interfaces and abstract classes are for? For my specific case, I want a data structure that will hold three lines of text only, and whose attributes must not be per-line or per-document. I will have a couple of thousand of these in some other structure, so overhead is important. Is there anything I can use for this, or is it better to extend something? If so, what?

    Read the article

  • Bidirectional real-time sync of large file tree between two distant linux servers

    - by dlo
    By large file tree I mean about 200k files, and growing all the time, although a relatively small number of files are being changed in any given hour. By bidirectional I mean that changes may occur on either server and need to be pushed to the other, so rsync doesn't seem appropriate. By distant I mean that the servers are both in data centers, but geographically remote from each other; currently there are only 2 servers, but that may expand over time. By real-time, it's OK for there to be a little latency between syncs, but running a cron every 1-2 minutes doesn't seem right, since a very small fraction of files may change in any given hour, let alone minute.

    EDIT: This is running on VPSs, so I might be limited on the kinds of kernel-level stuff I can do. Also, the VPSs are not resource-rich, so I'd shy away from solutions that require lots of RAM (like Gluster?).

    What's the best / most "accepted" approach to get this done? This seems like it would be a common need, but I haven't been able to find a generally accepted approach yet, which was surprising. (I'm seeking the safety of the masses. :)

    I've come across lsyncd to trigger a sync at the filesystem change level. That seems clever, though not super common, and I'm a bit confused by the various lsyncd approaches. There's just using lsyncd with rsync, but it seems this could be fragile for bidirectionality, since rsync doesn't have a notion of memory (e.g. to know whether a deleted file on A should be deleted on B, or whether it's a new file on B that should be copied to A). lipsync appears to be just an lsyncd+rsync implementation, right?

    Then there's using lsyncd with csync2, like this: http://www.axivo.com/community/threads/lightning-fast-synchronization-with-csync2-and-lsyncd.121/ ... I'm leaning towards this approach, but csync2 is a little quirky, though I did do a successful test of it. I'm mostly concerned that I haven't been able to find a lot of community confirmation of this method.

    People on here seem to like Unison a lot, but it seems that it is no longer under active development, and it's not clear that it has an automatic trigger like lsyncd. I've seen Gluster mentioned, but maybe overkill for what I need?

    UPDATE: FYI, I ended up going with the original solution I mentioned: lsyncd+csync2. It seems to work quite well, and I like the architectural approach of having the servers be very loosely joined, so that each server can operate indefinitely on its own regardless of the link quality between them.

    Read the article

  • .htaccess to deny access to most xml files

    - by CEich
    I recently had a Joomla site hacked, so I'm trying to harden the site a bit. There's a section in the recommended .htaccess that restricts outside access to the XML files that come with extensions. However, it also keeps my sitemap.xml file from being accessed. How do I allow one particular file while keeping the rest blocked? Here's the default code:

        <Files ~ "\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

    and my modification that caused a 500 error:

        <Files ~ "(?!sitemap)\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

    Read the article

  • How to extract terms from an HTML document

    - by bookcasey
    I have an HTML document filled with terms that I need to put into a spreadsheet. They follow this basic pattern:

        <ul>
            <li class="name"><a href="spot.html">Spot</a></li>
            <li class="type">Dog</li>
            <li class="color">Red</li>
        </ul>
        <ul>
            <li class="name"><a href="mittens.html">Mittens</a></li>
            <li class="type">Cat</li>
            <li class="color">Brown</li>
        </ul>
        <ul>
            <li class="name"><a href="squakers.html">Squakers</a></li>
            <li class="type">Little Parrot</li>
            <li class="color">Rainbow</li>
        </ul>

    It's very consistent. I need to extract the string within the li.name a (so, "Spot"), but only if the type is "Dog" or "Parrot", and put the results in a spreadsheet. I've been trying to use Sublime Text's ability to find with regex, but I'm really struggling, and since regex and HTML usually don't play nice, I was wondering if there is a better and easier way to accomplish this. Thanks.
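
    One regex-free approach, as a sketch (it assumes the file can simply be opened in a browser, whose console then runs JavaScript against the real DOM):

        // Collect the name from each <ul> whose type mentions Dog or Parrot,
        // then print one name per line, ready to paste into a spreadsheet.
        var rows = [];
        var lists = document.getElementsByTagName('ul');
        for (var i = 0; i < lists.length; i++) {
            var name = lists[i].querySelector('li.name a');
            var type = lists[i].querySelector('li.type');
            if (name && type && /Dog|Parrot/.test(type.textContent)) {
                rows.push(name.textContent);
            }
        }
        console.log(rows.join('\n'));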

    Read the article

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is: "You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance." The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with experience using Amazon RDS who can attest to what the situation actually is.

    Read the article

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have two servers: one already configured with .NET, which works fine, and a new one that appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2, and the website is configured with .NET 1.1.4322. In the ISAPI extensions I have mapped the .xml extension to c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same configuration on another server, which works fine. So are there other options, apart from the ISAPI extensions, that I need to look at? If I suffix the page .aspx, of course, it works fine.

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc., so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that). The "musts" we've come up with for any replacement system are:

    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) - this is related to the next point:
    - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert
    - Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
    - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
    - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP in both incoming packets and REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3 GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens, the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not, and the hang can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console, and if you do manage to get onto another server, the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with explorer crashing - I'm guessing due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal.

    The server is configured as follows:

        PERC 4/DC:
            DATA 2 - 12 SCSI HDD - RAID5
            SHADOWCOPY - 2 SCSI HDD - RAID1
        CERC SATA:
            DATA 11 - 4 SATA HDD - RAID5
            OS - 4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens, and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain, and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares. On the NIC I have:

    - unchecked the "Allow the computer to turn off this device to save power" box under Power Management
    - set Link Speed and Duplex to "Auto-negotiate 1000"
    - increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data)

    I've run network traces using Network Monitor and have found the following:

        417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run "netsh interface tcp set global autotuning=disabled" with no result. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.

    Read the article

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories; each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid = 0
        gid = 0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger.. It looks like the process is reliably erroring out at about 175,000 files. Rsync runs fine when I pick the same directories it has problems with one at a time.

    EDIT:

        rsync version 3.0.9 protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it didn't happen until several hours later than normal, however.

    Read the article
