Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 175/1167

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the 'ngx_http_gzip_static_module' module in Nginx 1.4.3 by adding --with-http_gzip_static_module to my ./configure options, but I receive the following error when I try to recompile (make):

        # make
        make -f objs/Makefile
        make[1]: Entering directory `/tmp/nginx-1.4.3'
        make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'. Stop.
        make[1]: Leaving directory `/tmp/nginx-1.4.3'
        make: *** [build] Error 2

    My current config (CentOS 6.4):

        # nginx -V
        nginx version: nginx/1.4.3
        built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
        TLS SNI support enabled
        configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23

    I was under the impression that this was a module that just needs to be 'enabled' as opposed to 'added'. What am I missing here?
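
    A sketch of the usual recovery, assuming the objs/Makefile is simply stale from the earlier configure run: wipe the generated build files, then re-run ./configure with the full argument list reported by nginx -V plus the new flag, and rebuild.

        cd /tmp/nginx-1.4.3
        make clean        # removes objs/ so the stale Makefile can't be reused
        ./configure \
            --conf-path=/etc/nginx/nginx.conf \
            --pid-path=/var/run/nginx.pid \
            --error-log-path=/var/log/nginx/error.log \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --http-log-path=/var/log/nginx/access.log \
            --with-http_ssl_module \
            --prefix=/usr \
            --add-module=./nginx-sticky-module-1.1 \
            --add-module=./headers-more-nginx-module-0.23 \
            --with-http_gzip_static_module
        make && make install

    On the 'enabled' vs 'added' point: gzip_static ships with the nginx source but is off by default, so --with-http_gzip_static_module at configure time followed by a full rebuild is the intended route; there is nothing to enable at runtime in a binary that was built without it.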

    Read the article

  • "Options ExecCGI is off in this directory" When try to run Ruby code using mod_ruby

    - by Itay Moav
    I am on Ubuntu with Apache 2.2. I installed fcgi via apt-get, then removed it via apt-get remove, and installed mod-ruby. Configuration I added to Apache:

        LoadModule ruby_module /usr/lib/apache2/modules/mod_ruby.so
        RubyRequire apache/ruby-run
        <Directory /var/www>
            Options +ExecCGI
        </Directory>
        <Files *.rb>
            SetHandler ruby-object
            RubyHandler Apache::RubyRun.instance
        </Files>
        <Files *.rbx>
            SetHandler ruby-object
            RubyHandler Apache::RubyRun.instance
        </Files>

    I have a file in the www directory with puts 'baba'. I have other files in that directory, all accessible via Apache. The test file has been chmod 777. In the browser I get 403, and in the Apache error log I get:

        [error] access to /var/www/t.rb failed for (null), reason: Options ExecCGI is off in this directory

    If I move this to a subfolder rubytest and modify the relevant config to be:

        <Directory /var/www/rubytest>
            Options +ExecCGI
        </Directory>

    and make sure the directory has 755 permissions on it, Apache just tries to download the file, as if it no longer recognizes the .rb extension. If I give the directory and files 777 it fails:

        usr/lib/ruby/1.8/apache/ruby-run.rb:53: warning: Insecure world writable dir /var/www/rubytest in LOAD_PATH, mode 040777
        [Tue May 24 19:39:58 2011] [error] mod_ruby: error in ruby
        [Tue May 24 19:39:58 2011] [error] mod_ruby: /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `load': loading from unsafe file /var/www/rubytest/t.rb (SecurityError)
        [Tue May 24 19:39:58 2011] [error] mod_ruby: from /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `handler'

    BUT, IF I USE *.rbx it works like a charm... go figure.
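
    For reference, a sketch of a consolidated fragment along those lines, using only the directives already in play above: keeping the <Files> handler block inside the <Directory> that grants ExecCGI avoids the handler and the option going out of sync when paths move (one plausible cause of the download-instead-of-execute symptom). The SecurityError at the end is mod_ruby's $SAFE check rejecting world-writable paths, so the directory wants 755 and the script 644, not 777.

        LoadModule ruby_module /usr/lib/apache2/modules/mod_ruby.so
        RubyRequire apache/ruby-run

        <Directory /var/www/rubytest>
            Options +ExecCGI
            <Files *.rb>
                SetHandler ruby-object
                RubyHandler Apache::RubyRun.instance
            </Files>
        </Directory>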

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how best to synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services: email is hosted on IMAP, working files are in Dropbox, and source code is managed in git. However, the following are things I always miss when jumping on the laptop:

        - Installed applications (current versions)
        - Installed libraries & utilities (/usr/local)
        - Apache VirtualHosts & other configurations (/etc)
        - Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time-consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
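
    For the nightly-sync half, one low-tech speedup is to run rsync over the network and exclude the heavyweight items, syncing those separately and less often; a sketch, where macbook.local and the exclude paths are assumptions to adjust to your own layout:

        # Push the home directory to the laptop, skipping VM images and caches
        rsync -avz --delete \
            --exclude 'Documents/Virtual Machines.localized/' \
            --exclude 'Library/Mail/' \
            --exclude 'Library/Caches/' \
            /Users/me/ me@macbook.local:/Users/me/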

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal 6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLs starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance listen on an internal port, say 127.0.0.1:8080, and having my current Apache2-worker instance configured to proxy pass and reverse pass to it for the 'drupal6' URLs. However, how do I compile or install the Apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, so that apache2ctl has a different name, say apache2pctl, and apache2pctl invokes /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (e.g. /etc/apache2p), so I can start and restart my two instances independently? As I understand it, no single 'apache2' executable can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate builds of the two apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
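
    A sketch of the common from-source approach, assuming an unpacked httpd 2.2 source tree: give the prefork build its own --prefix, so the binary, conf, logs and its own apachectl all land in a separate tree, and nothing in the Ubuntu-packaged /usr/sbin/apache2 or /etc/apache2 gets touched; no renaming is needed.

        # from an unpacked httpd-2.2.x source tree
        ./configure --prefix=/opt/apache2-prefork \
                    --with-mpm=prefork \
                    --enable-so
        make && make install

        # give it its own port in /opt/apache2-prefork/conf/httpd.conf:
        #   Listen 127.0.0.1:8080
        # then control it independently of the system Apache:
        /opt/apache2-prefork/bin/apachectl start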

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers; all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • Search XDocument with LINQ without knowing the Namespace

    - by BarDev
    Is there a way to search an XDocument without knowing the namespace? I have a process that logs all SOAP requests and encrypts the sensitive data. I want to find any elements based on name, something like: give me all elements where the name is CreditCard. I don't care what the namespace is. My problem seems to be with LINQ requiring an XML namespace. I have other processes that retrieve values from XML, but I know the namespace for those other processes:

        XDocument xDocument = XDocument.Load(@"C:\temp\Packet.xml");
        XNamespace xNamespace = "http://CompanyName.AppName.Service.Contracts";
        var elements = xDocument.Root
            .DescendantsAndSelf()
            .Elements()
            .Where(d => d.Name == xNamespace + "CreditCardNumber");

    But what I really want is the ability to search the XML without knowing about namespaces, something like this:

        XDocument xDocument = XDocument.Load(@"C:\temp\Packet.xml");
        var elements = xDocument.Root
            .DescendantsAndSelf()
            .Elements()
            .Where(d => d.Name == "CreditCardNumber");

    But of course this will not work, because I do not have a namespace. BarDev
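
    A sketch of the usual namespace-agnostic approach: match on XName.LocalName, which compares just the local part of the name and ignores whatever namespace the document uses.

        // (assumes using System.Linq; using System.Xml.Linq;)
        XDocument xDocument = XDocument.Load(@"C:\temp\Packet.xml");
        var elements = xDocument.Root
            .DescendantsAndSelf()
            .Where(d => d.Name.LocalName == "CreditCardNumber");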

    Read the article

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for ease of maintaining the packages across different environments like Dev, QA and Prod. After setting up all the packages with the config files, when we tested the packages from Business Intelligence Development Studio, they worked fine and picked up the values from the configuration file. When changing the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for 2 different environments and the packages worked fine, so we deployed to Prod, where they were also working fine. Yesterday, I had to make a functional change to one package, so I made the change in the package (it is just changing a parameter in a SQL procedure execution task, and not related to any variables) and tested it in BIDS with 2 environments, and it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of the manifest). The config file which was used by the package previously, and which had been working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA; I believe the package didn't read from the config file. One possible reason: the last executed values usually remain in the .dtsx file (this can be checked by opening the file in a text editor). Normally, when a package is executed, those values are overwritten from the config file; I guess that is not happening. What are the possible reasons for this? We have tested extensively switching between test environments and it does not show this behavior. We have encountered this in the Prod environment twice now. Has anyone else experienced this, and how did you resolve it?

    Read the article

  • PHP simplexml and xpath

    - by FFish
    I have an XML file like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <gallery>
            <album tnPath="tn/" lgPath="imm/" fsPath="iml/" >
                <img src="001.jpg" />
                <img src="002.jpg" />
            </album>
        </gallery>

    I am reading the file with:

        $xmlFile = "xml.xml";
        $xmlStr = file_get_contents($xmlFile . "?" . time());
        $xmlObj = simplexml_load_string($xmlStr);

    Now I am rebuilding the XML file and would like to save the album node with its attributes in a var. I was thinking of XPath:

        // these all return arrays with the images...
        // echo $xmlObj->xpath('/gallery/album@tnPath');
        // echo $xmlObj->xpath('//album[@tnPath]');
        // echo $xmlObj->xpath('//@tnPath');

    But that doesn't seem to work. Any help?
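
    In XPath, attributes need the attribute axis (a /@ step), or can be read off their parent element; a sketch of both routes, keeping in mind that xpath() always returns an array, so echoing its result directly won't print a value:

        <?php
        $xmlObj = simplexml_load_file('xml.xml');

        // Route 1: select the element, then read its attributes
        $albums = $xmlObj->xpath('//album');      // array of SimpleXMLElement
        $tnPath = (string) $albums[0]['tnPath'];  // "tn/"

        // Route 2: select the attribute node directly
        $attrs  = $xmlObj->xpath('//album/@tnPath');
        $tnPath = (string) $attrs[0];             // "tn/"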

    Read the article

  • Connect to web API from .NET

    - by Saif Khan
    How can I access and consume a web API from .NET? The API is not a .NET API. Here is sample code I have in Ruby:

        require 'uri'
        require 'net/http'

        url = URI.parse("http://account.codebasehq.com/widgets/tickets")
        req = Net::HTTP::Post.new(url.path)
        req.basic_auth('dave', '6b2579a03c2e8825a5fd0a9b4390d15571f3674d')
        req.add_field('Content-type', 'application/xml')
        req.add_field('Accept', 'application/xml')
        xml = "<ticket><summary>My Example Ticket</summary><status-id>1234</status-id><priority-id>1234</priority-id><ticket-type>bug</ticket-type></ticket>"
        res = Net::HTTP.new(url.host, url.port).start {|http| http.request(req, xml)}
        case res
        when Net::HTTPCreated
          puts "Record was created successfully."
        else
          puts "An error occurred while adding this record"
        end

    Where can I find information on consuming an API like this from .NET? I am aware of how to use .NET web services.
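
    A rough translation of the Ruby snippet into plain .NET (a sketch, assuming the System.Net and System.IO namespaces; note that Credentials answers a 401 challenge rather than sending the Authorization header preemptively the way req.basic_auth does):

        var url = "http://account.codebasehq.com/widgets/tickets";
        var xml = "<ticket><summary>My Example Ticket</summary>" +
                  "<status-id>1234</status-id><priority-id>1234</priority-id>" +
                  "<ticket-type>bug</ticket-type></ticket>";

        var req = (HttpWebRequest)WebRequest.Create(url);
        req.Method = "POST";
        req.ContentType = "application/xml";
        req.Accept = "application/xml";
        req.Credentials = new NetworkCredential("dave", "6b2579a03c2e8825a5fd0a9b4390d15571f3674d");

        using (var writer = new StreamWriter(req.GetRequestStream()))
            writer.Write(xml);

        // A non-2xx status makes GetResponse throw a WebException,
        // which a fuller version would catch and inspect.
        using (var res = (HttpWebResponse)req.GetResponse())
            Console.WriteLine(res.StatusCode == HttpStatusCode.Created
                ? "Record was created successfully."
                : "An error occurred while adding this record");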

    Read the article

  • AS3: doesn't read Atom

    - by Vinzcent
    Hey, I want to read an Atom feed in Flex. I can see in the debugger that it can read the feed and that there are entries; I can see each value. So far, so good. But when I want to assign a value from the feed to a variable, it never gives any text. It's always this: "". My code:

        ch.Name = xml.title;
        ch.Desc = xml.subtitle;
        ch.Updated = xml.updated;

        for each(var entry:XML in xml.entry)
        {
            var fee:Feed = new Feed();
            fee.Name = entry.title;
            fee.Url = entry.link.@href;
            fee.Desc = entry.summary;
            fee.Updated = entry.updated;
            fee.Published = entry.published;
            ch.Children.addItem(fee);
        }

    For example, this is the value ch.Name gets: ch.Name = "". That's strange, because I can see in the debugger that it should be "Tweakers.net". Thanks a lot, Vincent. Sorry for my bad English.
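
    Atom feeds declare a default namespace (xmlns="http://www.w3.org/2005/Atom" for Atom 1.0, which is assumed below), and in E4X xml.title only matches a <title> in no namespace, which is why every lookup comes back empty while the debugger still shows the values. A sketch of the usual fix:

        var atom:Namespace = new Namespace("http://www.w3.org/2005/Atom");
        default xml namespace = atom;  // make unprefixed E4X lookups use Atom's namespace

        ch.Name = xml.title;
        ch.Desc = xml.subtitle;
        ch.Updated = xml.updated;

        // ...or qualify each access explicitly instead:
        ch.Name = xml.atom::title;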

    Read the article

  • Apache start failing after apache config modifications, showing syntax error, cannot load php5apache2_2.dll into server

    - by Sandeepan Nath
    I am stuck again with Apache setup, guys. I am working on a Windows 7 system. I copied the working PHP5 installation directory from teammates and copied the necessary .dll files from inside the php5 installation folder (as they were in the teammates' working setup) to my windows/system32/. The Apache server started successfully with the default Apache config file and I was able to access localhost in a browser, but PHP code was not parsing. I noticed no line like the following in the Apache config file:

        # PHP5 module
        LoadModule php5_module D:/php5/php5apache2_2.dll

    If I add this line, Apache fails to start. Running the test configuration gives the following error:

        httpd.exe: Syntax error on line 60 of C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/httpd.conf: Cannot load D:/php5/php5apache2_2.dll into server: The specified procedure could not be found.

    But the dll file is there in the specified location, and I have given the current system user all permissions to the php5 installation directory. The same line also appears in the Apache error log, though I am not sure when exactly logs are written to the log file. I am confused: are log entries not made if I have the log file open for reading? lol ... because I could not observe a pattern in when entries are made; I saw some log entries being made, some not. Oh, why is Apache setup always such a headache????
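
    "The specified procedure could not be found" usually means Windows found and loaded the DLL but an expected export was missing, i.e. a PHP build that doesn't match this Apache (built for a different Apache version, a non-thread-safe build, or a different VC runtime) rather than a path or permissions problem. For reference, a sketch of the usual httpd.conf wiring once a matching thread-safe PHP 5 build for Apache 2.2 is in D:/php5:

        # PHP5 module (requires the thread-safe PHP build for Apache 2.2,
        # compiled against the same VC runtime as Apache itself)
        LoadModule php5_module "D:/php5/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "D:/php5"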

    Read the article

  • Using sed to introduce a newline after each > in a 1+ gigabyte one-line text file

    - by wasatz
    I have a giant text file (about 1.5 gigabytes) with XML data in it. All text in the file is on a single line, and attempting to open it in any text editor (even the ones mentioned in this thread: http://stackoverflow.com/questions/159521/text-editor-to-open-big-giant-huge-large-text-files ) either fails horribly or is totally unusable because the editor hangs when attempting to scroll. I was hoping to introduce newlines into the file by using the following sed command:

        sed 's/>/>\n/g' data.xml > data_with_newlines.xml

    Sadly, this caused sed to give me a segmentation fault. From what I understand, sed reads the file line by line, which would in this case mean that it attempts to read the entire 1.5 GB file as one line, which would most certainly explain the segfault. However, the problem remains. How do I introduce newlines after each > in the XML file? Do I have to resort to writing a small program that does this for me by reading the file character by character?
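
    One sketch that sidesteps the whole-file-as-one-line problem: let Perl treat '>' as the end-of-record marker, so the file is read in small chunks no matter how long the line is (memory use is bounded by the longest stretch between '>' characters, not by the file size):

        perl -pe 'BEGIN { $/ = ">" } s/>/>\n/' data.xml > data_with_newlines.xml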

    Read the article

  • Sensible Way to Pass Web Data to Sql Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create", aka an insert of a new item.

        <Elements>
          <Element ID="1234">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">287</Attr>
            <Attr ID="234">
              <Element ID="99825">
                <Attr ID="7">Value1</Attr>
                <Attr ID="8">Value2</Attr>
                <Attr ID="9" Action="delete" />
              </Element>
              <Element ID="99826" Action="delete" />
              <Element ID="0" Type="24">
                <Attr ID="7">Value4</Attr>
                <Attr ID="8">Value5</Attr>
                <Attr ID="9">Value6</Attr>
              </Element>
              <Element ID="0" Type="24">
                <Attr ID="7">Value7</Attr>
                <Attr ID="8">Value8</Attr>
                <Attr ID="9">Value9</Attr>
              </Element>
            </Attr>
            <Rel ID="3827" Action="delete" />
            <Rel ID="2284" Role="parent">
              <Element ID="3827" />
              <Element ID="3829" />
              <Attr ID="665">1</Attr>
            </Rel>
            <Rel ID="0" Type="23" Role="child">
              <Element ID="3830" />
              <Attr ID="67" </Rel>
          </Element>
          <Element ID="0" Type="87">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">569</Attr>
            <Attr ID="234">
              <Element ID="0" Type="24">
                <Attr ID="7">Value10</Attr>
                <Attr ID="8">Value11</Attr>
                <Attr ID="9">Value12</Attr>
              </Element>
            </Attr>
          </Element>
          <Element ID="1235" Action="delete" />
        </Elements>

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

        ElementID  2284
        ElementID1 1234
        ElementID2 3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k+).

    UPDATE

    I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements. Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements. Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

        var newid = 0;
        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }
        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678); all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

        - n1_t, value is the elementtype of the element to be created
        - n1_p, value is the parent id of the element (if it is a relationship)
        - n1_c, value is the child id of the element (if it is a relationship)

    Deleting elements. A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS
        DECLARE @hdoc int
        --- snip ---
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc
        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element IDs to the newly inserted ones:

        UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Explanation of JAXB error: Invalid byte 1 of 1-byte UTF-8 sequence

    - by Marcus
    We're parsing an XML document using JAXB and get this error:

        [org.xml.sax.SAXParseException: Invalid byte 1 of 1-byte UTF-8 sequence.]
        at javax.xml.bind.helpers.AbstractUnmarshallerImpl.createUnmarshalException(AbstractUnmarshallerImpl.java:315)

    What exactly does this mean and how can we resolve it? We are executing the code as:

        jaxbContext = JAXBContext.newInstance(Results.class);
        Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
        unmarshaller.setSchema(getSchema());
        results = (Results) unmarshaller.unmarshal(new FileInputStream(inputFile));

    Update: the issue appears to be due to this "funny" character in the XML file: ¿ Why would this cause such a problem?
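
    The message means the parser was told (or assumed) the bytes are UTF-8 but hit a byte sequence that isn't legal UTF-8; ¿ is the single byte 0xBF in Latin-1, which can never begin a character in UTF-8. Assuming the file really is Latin-1/ISO-8859-1, either fix the encoding declaration in the XML prolog to match, or decode explicitly, as in this sketch:

        // assumes: import java.io.*; adjust the charset to whatever
        // actually produced the file
        Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
        unmarshaller.setSchema(getSchema());
        Reader reader = new InputStreamReader(
                new FileInputStream(inputFile), "ISO-8859-1");
        results = (Results) unmarshaller.unmarshal(reader);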

    Read the article

  • Accessing children with a given name via JDOM

    - by Christian
    I want to access the children named skos:Concept. getChildren("skos:Concept") and getChildren("Concept") both give me an empty list. What should I use instead? My example data:

        <owl:AnnotationProperty rdf:about="&dc;identifier"/>
        <owl:ObjectProperty rdf:about="&skos;narrower"/>
        <skos:Concept rdf:about="#concept:0_acetylpantolactone:4253501">
            <skos:prefLabel xml:lang="">0-acetylpantolactone</skos:prefLabel>
            <skos:hiddenLabel xml:lang="">2(3H)-Furanone, 3-(acetyloxy)dihydro-4,4-dimethyl-, (R)-</skos:hiddenLabel>
            <dc:identifier rdf:resource="urn:CHID:028227363"/>
            <dc:identifier rdf:resource="urn:MESH:C014305"/>
        </skos:Concept>
        <skos:Concept rdf:about="#concept:1012S:4202655">
            <skos:prefLabel xml:lang="">1012S</skos:prefLabel>
            <skos:hiddenLabel xml:lang="">C19-H16-Cl2-N6-O</skos:hiddenLabel>
            <skos:hiddenLabel xml:lang="">Compound 1012S</skos:hiddenLabel>
            <dc:identifier rdf:resource="urn:CAS:95211_91_9"/>
            <dc:identifier rdf:resource="urn:CHID:095211919"/>
        </skos:Concept>
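
    JDOM treats the prefix as separate from the name: lookups take the local name plus a Namespace object built from the URI the skos prefix is bound to. A sketch with the two-argument getChildren (the URI below is the common SKOS namespace and is an assumption; check the &skos; entity declaration in your document for the exact value):

        // assumes: import java.util.List; import org.jdom.Namespace;
        Namespace skos = Namespace.getNamespace("skos",
                "http://www.w3.org/2004/02/skos/core#");
        List concepts = root.getChildren("Concept", skos);

    Note that getChildren only returns direct children of the element it is called on; to collect skos:Concept elements at any depth you would have to walk the tree.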

    Read the article

  • How to configure router to give XBox top priority / most bandwidth?

    - by MrSparky
    Hi all - Newbie here so go easy... and apologies in advance if I blow the community etiquette / rules! Here's what I'm trying to do: I have a "D-Link DIR-655 Extreme N" wireless router and an Xbox 360 with the old-style wireless connection thing (USB attachment)... I want to configure my router / network to give my Xbox as much bandwidth as it wants, whenever it wants. I have tried giving the Xbox a unique IP (within my network) and then tweaking the router to treat that IP as a top-priority application (using the router's QoS stuff). The problem is whenever I turn off the Xbox, I can't connect to the network the next time I start it up. It seems the only reliable setting in the Xbox is to use "Automatic" for the IP settings within the Network Configuration area. Supposedly the D-Link ships with default settings that attempt to recognize a game console and give it top priority... but I've not seen good results (lots of stuttering / lag when someone else jumps online, etc). Any suggestions? thanks again!

    Read the article

  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server and included this file in nginx.conf. However, in my access logs for the sites I still see these IP addresses showing up. Do I need to include the blacklist in each site's conf instead of the global conf for Nginx? Here is my nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;

            include /etc/nginx/conf.d/*.conf;

            # Load virtual host configuration files.
            include /etc/nginx/sites-enabled/*;

            # BLOCK SPAMMERS IP ADDRESSES
            include /etc/nginx/conf.d/blockips.conf;
        }

    blockips.conf:

        deny 58.218.199.250;

    access.log still shows this IP address:

        58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-"

    What am I doing incorrectly?

    Read the article

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
        issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
        subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
        err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration. Any ideas?
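
    Since the same command works from an interactive shell, the prime suspect is the CI server's environment rather than the script: the "no version information available" warnings show nail is being pointed at Bitnami's OpenSSL libraries (presumably via LD_LIBRARY_PATH), and that stack can't find the CA bundle needed to verify smtp.gmail.com. A sketch of two things to try, assuming a nail/heirloom-mailx build that honours -S variables:

        #!/bin/bash
        # 1) Don't inherit the Bitnami library path for this command
        unset LD_LIBRARY_PATH

        # 2) As a fallback, relax certificate verification (weakens security,
        #    so prefer fixing the CA bundle if step 1 isn't enough)
        echo "Build Worked!" | nail -S ssl-verify=ignore -A myisp \
            -s 'Build Success' [email protected]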

    Read the article

  • How to disable auto-generated WCF configuration

    - by user351025
    Every time my program runs, VS adds the default configuration to my app.config file. On that run it works fine, but on the next run it actually tries to read the config. The problem is that the default configuration has errors: it adds the attribute "Address", but attributes are not allowed to have capitals, so it throws an exception. This means I have to remove the bad section every run! I've tried to configure the .config myself but it gives errors. Here is the code that I use to host the server:

        private static System.Threading.AutoResetEvent stopFlag = new System.Threading.AutoResetEvent(false);

        ServiceHost host = new ServiceHost(typeof(Service), new Uri("http://localhost:8000"));
        host.AddServiceEndpoint(typeof(IService), new BasicHttpBinding(), "ChessServer");
        host.Open();
        stopFlag.WaitOne();
        host.Close();

    Here is the client code that calls the server:

        ChannelFactory<IService> scf;
        scf = new ChannelFactory<IService>(new BasicHttpBinding(), "http://localhost:8000");
        IService service = scf.CreateChannel();

    Thanks for any help.

    Read the article

  • How to avoid serializing zero values with Simple Xml

    - by Bram Vandenbussche
    I'm trying to serialise an object using Simple XML (http://simple.sourceforge.net/). The object setup is pretty simple:

        @Root(name = "order_history")
        public class OrderHistory {
            @Element(name = "id", required = false)
            public int ID;

            @Element(name = "id_order_state")
            public int StateID;

            @Element(name = "id_order")
            public int OrderID;
        }

    The problem is when I create a new instance of this class without an ID:

        OrderHistory newhistory = new OrderHistory();
        newhistory.OrderID = _orderid;
        newhistory.StateID = _stateid;

    and I serialise it via Simple XML:

        StringWriter xml = new StringWriter();
        Serializer serializer = new Persister();
        serializer.write(newhistory, xml);

    it still reads 0 in the resulting XML:

        <?xml version='1.0' encoding='UTF-8'?>
        <order_history>
            <id>0</id>
            <id_order>2</id_order>
            <id_order_state>8</id_order_state>
        </order_history>

    I'm guessing the reason for this is that the ID property is not null, since primitive integers can't be null. But I really need to get rid of this node, and I'd rather not remove it manually. Any clues, anyone?
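
    The diagnosis above is right: a primitive int always has a value to write. The usual fix is the boxed type; with required = false, a null Integer is simply skipped. A sketch:

        @Root(name = "order_history")
        public class OrderHistory {
            @Element(name = "id", required = false)
            public Integer ID;    // boxed: stays null until assigned, so the
                                  // <id> node is omitted for new objects

            @Element(name = "id_order_state")
            public int StateID;

            @Element(name = "id_order")
            public int OrderID;
        }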

    Read the article

  • Recursive XSLT, part 2

    - by Eric
    Ok, following on from my question here. Lets say my pages are now like this:

    A.xml:

        <page>
            <header>Page A</header>
            <content-a>Random content for page A</content-a>
            <content-b>More of page A's content</content-b>
            <content-c>More of page A's content</content-c>
            <!-- This doesn't keep going: there are a predefined number of sections -->
        </page>

    B.xml:

        <page include="A.xml">
            <header>Page B</header>
            <content-a>Random content for page B</content-a>
            <content-b>More of page B's content</content-b>
            <content-c>More of page B's content</content-c>
        </page>

    C.xml:

        <page include="B.xml">
            <header>Page C</header>
            <content-a>Random content for page C</content-a>
            <content-b>More of page C's content</content-b>
            <content-c>More of page C's content</content-c>
        </page>

    After the transform (on C.xml), I'd like to end up with this:

        <h1>Page C</h1>
        <div>
            <p>Random content for page C</p>
            <p>Random content for page B</p>
            <p>Random content for page A</p>
        </div>
        <div>
            <p>More of page C's content</p>
            <p>More of page B's content</p>
            <p>More of page A's content</p>
        </div>
        <div>
            <p>Yet more of page C's content</p>
            <p>Yet more of page B's content</p>
            <p>Yet more of page A's content</p>
        </div>

    I know that I can use document(@include) to include another document. However, the recursion is a bit beyond me. How would I go about writing such a transform?
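
    A sketch of one way to write it in XSLT 1.0 (a starting point rather than a polished solution): a named template follows the @include chain with document(), emitting one <p> per page for a given section before recursing into the included page.

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <xsl:template match="/page">
            <h1><xsl:value-of select="header"/></h1>
            <!-- one <div> per content-* section of the outermost page -->
            <xsl:for-each select="*[starts-with(name(), 'content-')]">
              <div>
                <xsl:call-template name="chain">
                  <xsl:with-param name="page" select=".."/>
                  <xsl:with-param name="section" select="name()"/>
                </xsl:call-template>
              </div>
            </xsl:for-each>
          </xsl:template>

          <!-- emit this page's section, then recurse into its include -->
          <xsl:template name="chain">
            <xsl:param name="page"/>
            <xsl:param name="section"/>
            <p><xsl:value-of select="$page/*[name() = $section]"/></p>
            <xsl:if test="$page/@include">
              <xsl:call-template name="chain">
                <xsl:with-param name="page"
                    select="document($page/@include, $page)/page"/>
                <xsl:with-param name="section" select="$section"/>
              </xsl:call-template>
            </xsl:if>
          </xsl:template>

        </xsl:stylesheet>

    The second argument to document() makes the relative URI in @include resolve against the document currently being processed, so B.xml can find A.xml no matter where the transform started.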

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google, I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple of days I have noticed that any machine newly deployed or newly installing the client will show "No" for Client in the SCCM console. The client machines will show connected to a site, but the Software Center shows "IT Organization" instead of our site like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, uninstalling and reinstalling, or repairing does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs except for ClientMessaging.log, which continuously repeats this line:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    thanks

    Read the article

  • SQL Server XML Schemas

    - by Dave Ballantyne
    Ever been curious about the schema of, say, an SSRS .rdl file? How about the execution plan? Not only should you already have the .xsd files (check out the folder 'Microsoft SQL Server\100\Tools\Binn\schemas\sqlserver'), they are also available online here.

    Read the article

  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LinqToSQL. I parse the XML file using LinqToXml into an IEnumerable. Then I create a foreach loop in which I build my LinqToSQL objects and call InsertOnSubmit and SubmitChanges at the end of each iteration. Nothing special here. Usually, I make it through around 4,100 records before receiving a StackOverflowException from LinqToSql, right as I call SubmitChanges. It's not always at 4,100: sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, by putting them in their own XML file, and they insert fine... so it's not the data. I'm running the whole process from an MVC2 app that is uploading the XML file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern that I should follow when I have to do many insertions into the database? I never encounter this exception with smaller XML files, just larger ones.
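
    One pattern that commonly helps here: the DataContext accumulates tracking state for every entity it has ever seen, so discard it every few hundred records instead of keeping one alive for the whole file. A hedged sketch (MyDataContext, Orders, ParseXmlFile and ToEntity are placeholder names, not the actual types from the question):

        const int batchSize = 500;
        var records = ParseXmlFile(path);      // the existing LinqToXml step
        var db = new MyDataContext();          // placeholder context type
        int n = 0;
        foreach (var rec in records)
        {
            db.Orders.InsertOnSubmit(ToEntity(rec));  // placeholder table/mapper
            if (++n % batchSize == 0)
            {
                db.SubmitChanges();
                db.Dispose();                  // drop the tracked entities
                db = new MyDataContext();      // start with a clean change tracker
            }
        }
        db.SubmitChanges();
        db.Dispose();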

    Read the article
