Search Results


  • Rails - strip whitespace and line breaks from an XML import

    - by val_to_many
    Hey folks, I am stuck with something quite simple but really annoying: I have an XML file with one node whose content includes line breaks and whitespace. Sadly I can't change the XML.

        <?xml version="1.0" encoding="utf-8" ?>
        <ProductFeed>
          <Vendors>
            <Vendor>
              <Name>ACME Ltd.</Name>
              <Product>
                <Name>Fooproduct</Name>
                <Category>
                  Foo Root :: Bar Category
                </Category>
              </Product>
            </Vendor>
          </Vendors>
        </ProductFeed>

    I get to the node and can read from it without trouble:

        url = "http://feeds.somefeed/feed.xml.gz"
        @source = open((url), :http_basic_authentication => ["USER", "PW"])
        @gz = Zlib::GzipReader.new(@source)
        @result = @gz.read
        @doc = Nokogiri::XML(@result)
        @doc.xpath("/ProductFeed/Vendors/Vendor").each do |manuf|
          vendor = manuf.css("Name").first.text
          manuf.xpath("//child::Product").each do |product|
            product_name = product.css("Name").text
            foocat = product.css("Category").text
            puts "#{vendor} ---- #{product_name} ---- #{foocat} "
          end
        end

    This results in: ACME Ltd. ---- Fooproduct ---- Foo Root :: Bar Category. Obviously there are line breaks and tab stops or spaces in the string returned by product.css("Category").text. Does anyone know how to strip line breaks, tabs and spaces from the result right here? Alternatively I could do that in the next step, where I do a find on 'foocat' like barcat = Category.find_by_foocat(foocat). Thanks for helping! Val

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual. I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
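    As a rough illustration of the approach hinted at in the question (read each record's bytes on the single reader thread, then hand only the byte[] to the pool so the BinaryReader is never shared across threads), here is a minimal sketch. RECORD_LENGTH, Data and ParseRecord are illustrative stand-ins for DATALENGTH, the question's Data type and PROCESS_Binary_Return_lineData, and it assumes .NET 4.5 for Task.Run and ConcurrentDictionary.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;

        class Data { public DateTime DateTime; /* plus whatever fields the real type has */ }

        static class RecordLoader
        {
            const int RECORD_LENGTH = 128;   // assumed fixed record size

            public static ConcurrentDictionary<DateTime, Data> Load(string path)
            {
                var dic = new ConcurrentDictionary<DateTime, Data>();
                var tasks = new List<Task>();
                using (var reader = new BinaryReader(File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read)))
                {
                    while (reader.BaseStream.Position + RECORD_LENGTH <= reader.BaseStream.Length)
                    {
                        byte[] record = reader.ReadBytes(RECORD_LENGTH);   // only this thread touches the reader
                        tasks.Add(Task.Run(() =>
                        {
                            Data lineData = ParseRecord(record);            // pure CPU work on a private buffer
                            dic.TryAdd(lineData.DateTime, lineData);        // thread-safe, no explicit lock needed
                        }));
                    }
                }
                Task.WaitAll(tasks.ToArray());
                return dic;
            }

            static Data ParseRecord(byte[] record)
            {
                // placeholder for the real parsing logic
                return new Data { DateTime = DateTime.UtcNow };
            }
        }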

  • Connecting to database on web host in Visual Studio

    - by Anders Svensson
    I have a web site developed locally with a local SQL Server database. I also have a web host that provides one SQL Server database for my site. Now I want to deploy the application, and I would like to be able to manage the remote database from the Server Explorer in Visual Studio. I have the connection string used in the application, which works fine for adding, say, a datasource to a control, etc. But I don't know if there's any way to use it to connect to the database inside Server Explorer so that I can add tables etc. I have read that you're supposed to be able to do this instead of using SQL Server Management Studio, but I haven't read anything about how to connect to the remote database in it. What I have tried so far is this: I selected Add Database in Server Explorer. This brings up first a dialog where I choose SQL Server, and then a dialog where I can set the Server name (which I tried setting to the IP address from the connection string) and Authentication (where I chose SQL Server Authentication, with the user id and password from the connection string). But when I test the connection it fails. The connection string itself works fine when used for datasources in the application (obviously with different user name and password). Any help appreciated!
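    Since the same server address and credentials are involved either way, it can help to first confirm that they work from plain ADO.NET outside of Server Explorer; a minimal sketch, where the server address, database name and credentials are placeholders for the values in the real connection string:

        using System;
        using System.Data.SqlClient;

        class ConnectionTest
        {
            static void Main()
            {
                // Placeholders: substitute the host's server/IP, database, user id and password.
                const string connectionString =
                    "Data Source=203.0.113.10;Initial Catalog=MyDatabase;User ID=myUser;Password=myPassword;";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();   // throws if the server is unreachable or the login is rejected
                    Console.WriteLine("Connected. Server version: " + conn.ServerVersion);
                }
            }
        }

    If this also fails outside Visual Studio, the host may only allow connections from its own network or from whitelisted IP addresses, which would equally explain the Server Explorer failure.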

  • Doubts about .NET Garbage Collector

    - by Smjert
    I've read some docs about the .NET Garbage Collector but I still have some doubts (examples in C#):

    1) Does GC.Collect() perform a partial or a full collection?

    2) Does a partial collection block the execution of the "victim" application? If yes, then I suppose it is a very "light" thing to do, since I'm running a game server that uses 2-3 GB of memory and I "never" have execution stops (or I can't see them...).

    3) I've read about GC roots but still can't understand exactly how they work. Suppose this is the code (C#):

        class MyClass1
        {
            // ...
            public List<MyClass2> classList = new List<MyClass2>();
            // ...
        }

        // Main:
        main()
        {
            MyClass1 a = new MyClass1();
            MyClass2 b = new MyClass2();
            a.classList.Add(b);
            b = null;
            DoSomeLongWork();
        }

    Will b ever be eligible to be garbage collected (before DoSomeLongWork finishes)? Can the reference to b that classList contains be considered a root? Or is a root only the first reference to the instance? (I mean, is b the root reference because the instantiation happens there?)
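    Not part of the original question, but a small experiment that makes the reachability point concrete: a WeakReference reports whether its target has been collected without keeping it alive, so you can watch when the object becomes unreachable. (Results can vary in Debug builds or under a debugger, where the JIT may extend local lifetimes.)

        using System;
        using System.Collections.Generic;

        class MyClass2 { }
        class MyClass1 { public List<MyClass2> classList = new List<MyClass2>(); }

        class Program
        {
            static void Main()
            {
                var a = new MyClass1();
                var b = new MyClass2();
                a.classList.Add(b);

                var weak = new WeakReference(b);   // observes the object without rooting it
                b = null;

                Collect();
                // Still reachable through a.classList, so not eligible for collection.
                Console.WriteLine("Alive while the list holds it: " + weak.IsAlive);

                a.classList.Clear();
                Collect();
                // Typically collected now: no chain from any root reaches it any more.
                Console.WriteLine("Alive after removal: " + weak.IsAlive);

                GC.KeepAlive(a);
            }

            static void Collect()
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();
            }
        }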

  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transformed state, and I need to extract some information from it. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements. I'd like to read the document only a single time, fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at the DOMParser, which seems like it would be ideal, but I don't want to have to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.

  • Is XMLReader a SAX parser, a DOM parser, or neither?

    - by Renesis
    I am testing various methods to read (possibly large, and very frequently read) XML configuration files in PHP. No writing is ever needed. I have two successful implementations, one using SimpleXML (which I know is a DOM parser) and one using XMLReader. I know that a DOM parser must read the whole tree and therefore uses more memory. My tests reflect that. I also know that a SAX parser is an "event-based" parser that uses less memory because it reads each node from the stream without checking what comes next. XMLReader also reads from a stream, with the cursor providing data about the node it is currently at. So it definitely sounds like XMLReader (http://us2.php.net/xmlreader) is not a DOM parser, but my question is: is it a SAX parser, or something else? It seems like XMLReader behaves the way a SAX parser does but does not raise the events itself (in other words, can you construct a SAX parser with XMLReader?). If it is something else, does the classification it falls into have a name?

  • Generate Java class and call its method dynamically

    - by Jacob
        package reflection;

        import java.io.*;
        import java.lang.reflect.*;

        class class0 { public void writeout0() { System.out.println("class0"); } }
        class class1 { public void writeout1() { System.out.println("class1"); } }
        class class2 { public void writeout2() { System.out.println("class2"); } }

        class class3 {
            public void run() {
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
                    String line = reader.readLine();
                    Class cls = Class.forName(line);
                    // define method here
                } catch (Exception ee) {
                    System.out.println("here " + ee);
                }
            }

            public void writeout3() { System.out.println("class3"); }
        }

        class class4 { public void writeout4() { System.out.println("class4"); } }
        class class5 { public void writeout5() { System.out.println("class5"); } }
        class class6 { public void writeout6() { System.out.println("class6"); } }
        class class7 { public void writeout7() { System.out.println("class7"); } }
        class class8 { public void writeout8() { System.out.println("class8"); } }
        class class9 { public void writeout9() { System.out.println("class9"); } }

        class testclass {
            public static void main(String[] args) {
                System.out.println("Write class name : ");
                class3 example = new class3();
                example.run();
            }
        }

    The question is this: the third class reads the name of a class as a String from the console. Upon reading the class name, it should dynamically load that class and call its writeout method. If a class is not read from input, it should not be initialized. But I can't continue any more; what do I need to do in class3? Thanks;

  • URL with no query parameters - How to distinguish.

    - by Broken Link
    Env: .NET 1.1. I got into this situation where I need to give out a URL that someone can use to redirect users to our page. When they redirect, they also need to tell us what message to display on the page. Initially I thought of something like this:

        http://example.com/a.aspx?reason=100
        http://example.com/a.aspx?reason=101
        ...
        http://example.com/a.aspx?reason=115

    So when we get this URL, we can display a different message based on 'reason'. But the problem turns out to be that they cannot send any query parameters at all. They want 15 different URLs, since they can't send query params. It doesn't make any sense to me to create 15 pages just to display a message. Any smart ideas that keep one URL and pass the 'reason' through some other means?

    EDIT: Options I'm thinking of based on the answers: try HttpRequest.PathInfo, or, as a second option, have an HttpHandler read the path via HttpContext.Request.Path and act based on the path. Of course I will have some 15 entries like this in web.config:

        <add verb="*" path="reason1.ashx" type="WebApplication1.Class1, WebApplication1" />
        <add verb="*" path="reason2.ashx" type="WebApplication1.Class1, WebApplication1" />

    Does that look clean?
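    To make the HttpHandler option concrete, here is a minimal sketch of a single handler class mapped to all fifteen .ashx paths, deriving the reason code from the requested path. It is written against .NET 1.1-era APIs, and the message lookup is illustrative.

        using System;
        using System.IO;
        using System.Web;

        public class ReasonHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // e.g. "/app/reason7.ashx" -> "reason7" -> 7
                string file = Path.GetFileNameWithoutExtension(context.Request.Path);
                int reason;
                try { reason = int.Parse(file.Replace("reason", "")); }
                catch { reason = 0; }   // unknown path: fall back to a default message

                context.Response.Write("Message for reason code " + reason);
            }
        }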

  • Using JSF, PrimeFaces and JPA: Create Basic WebApp without using Generated CRUD Classes, Forms, etc

    - by user2774489
    I am trying to build a basic CRUD application with NetBeans 7.4, JSF, PrimeFaces and JPA using MySQL. I have successfully done this by using the NetBeans wizards. Now I want to do it from scratch, with no wizards. There seems to be a lack of support for the combination of JSF, PrimeFaces and JPA. When I say "lack", I mean a full example (I might be asking too much) that does not use the auto-generated CRUD templates/classes AND that shows actual queries coded and passed to the PrimeFaces datatables. YouTube is full of non-English examples using Hibernate (not JPA) and other examples that show flashy GUIs with no code. So far I understand you need an @Entity class (which defines the physical tables), a controller (serializable) and the .xhtml web page to show the datatable. What else? Also, I'm not seeing any posts or examples where queries are used with JPA/JSF and how they are tied together (in one place). I need to connect the dots here so that I can leverage JSF/JPA to create simple queries to populate my PrimeFaces DataTables. I've read the blogs and I've googled until I'm blue in the face. Sending me a list of URLs to read to learn about each product is something I've already done. I get what they do independently, but I am looking for the "How do they all connect" answer, with maybe some basic code examples!! :)

  • C# class design - expose variables for reading but not setting

    - by James Brauman
    I have a polygon class which stores a list of Microsoft.Xna.Framework.Vector2 as the vertices of the polygon. Once the polygon is created, I'd like other classes to be able to read the positions of the vertices, but not change them. I am currently exposing the vertices through this property:

        /// <summary>
        /// Gets the vertices stored for this polygon.
        /// </summary>
        public List<Vector2> Vertices
        {
            get { return _vertices; }
        }
        List<Vector2> _vertices;

    However, you can change any vertex using code like:

        Polygon1.Vertices[0] = new Vector2(0, 0);

    or

        Polygon1.Vertices[0].X = 0;

    How can I limit other classes to only reading the properties of these vertices, and not being able to set new ones in my List? The only thing I can think of is to pass a copy to classes that request it. Note that Vector2 is a struct that is part of the XNA framework and I cannot change it. Thanks.
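    One common approach, sketched below (not from the original post): expose the list as a System.Collections.ObjectModel.ReadOnlyCollection<Vector2>. Callers can index and enumerate it, but assigning through it does not compile, and because Vector2 is a struct they only ever receive copies of the stored values. List<T>.AsReadOnly() returns a cheap wrapper over the same underlying list, so no data is copied.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using Microsoft.Xna.Framework;

        public class Polygon
        {
            private readonly List<Vector2> _vertices;

            public Polygon(IEnumerable<Vector2> vertices)
            {
                _vertices = new List<Vector2>(vertices);
            }

            /// <summary>
            /// Read-only view of the vertices; only the polygon holds the mutable list.
            /// </summary>
            public ReadOnlyCollection<Vector2> Vertices
            {
                get { return _vertices.AsReadOnly(); }
            }
        }

        // Polygon1.Vertices[0] = new Vector2(0, 0);   // compile error: no setter on the indexer
        // Vector2 v = Polygon1.Vertices[0]; v.X = 0;  // fine, but v is a copy; the polygon is untouched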

  • Combining the streams: Web application

    - by Surendra J
    This question deals mainly with streams in a .NET web application. In my web application I will display something like this:

        bottle.doc
        sheet.xls
        presentation.ppt
        stackof.jpg
        Button

    I will keep a checkbox for each one so it can be selected. Suppose a user selects the four files and clicks the button underneath. Then I instantiate classes for each type of file to convert it into PDF (which I have already written), convert them into PDFs and return them. My problem is that the classes are able to read the data from a URL and convert it into PDF, but I don't know how to return the streams and merge them.

        string url = @"url";

        // Prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.ContentType = "application/mspowerpoint";
        request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0";

        // Execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

        // We will read data via the response stream
        Stream resStream = response.GetResponseStream();

        // Write content into the MemoryStream
        BinaryReader resReader = new BinaryReader(resStream);
        MemoryStream PresentaionStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength));

        // Convert the presentation stream into PDF and save it to local disk.
        // But I would like to return the stream again.

    How can I achieve this? Any ideas are welcome.
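    As a rough sketch of the "return the stream" part (ConvertToPdf stands in for the asker's existing conversion classes, and the method names are illustrative): download each document into a MemoryStream, convert it, and hand the resulting streams back to the caller. The actual merging of several PDFs into one document needs a PDF library; simply concatenating PDF byte streams does not produce a valid merged file.

        using System.Collections.Generic;
        using System.IO;
        using System.Net;

        static class PdfFetcher
        {
            // Downloads one document and returns its converted PDF as a seekable MemoryStream.
            public static MemoryStream FetchAsPdf(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "GET";

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var responseStream = response.GetResponseStream())
                {
                    var buffer = new MemoryStream();
                    responseStream.CopyTo(buffer);   // Stream.CopyTo needs .NET 4; loop with Read/Write on older versions
                    buffer.Position = 0;
                    return ConvertToPdf(buffer);     // placeholder for the existing doc/xls/ppt-to-PDF conversion
                }
            }

            static MemoryStream ConvertToPdf(MemoryStream source)
            {
                // placeholder: the real conversion classes go here
                return source;
            }
        }

        // Caller: one stream per selected file; merge them afterwards with a PDF library.
        // List<MemoryStream> parts = new List<MemoryStream> { PdfFetcher.FetchAsPdf(url1), PdfFetcher.FetchAsPdf(url2) };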

  • Why does Microsoft advise against readonly fields with mutable values?

    - by Weeble
    In the Design Guidelines for Developing Class Libraries, Microsoft say:

        Do not assign instances of mutable types to read-only fields. The objects created using a mutable type can be modified after they are created. For example, arrays and most collections are mutable types while Int32, Uri, and String are immutable types. For fields that hold a mutable reference type, the read-only modifier prevents the field value from being overwritten but does not protect the mutable type from modification.

    This simply restates the behaviour of readonly without explaining why it's bad to use readonly. The implication appears to be that many people do not understand what "readonly" does and will wrongly expect readonly fields to be deeply immutable. In effect it advises using "readonly" as code documentation indicating deep immutability - despite the fact that the compiler has no way to enforce this - and disallows its use for its normal function: to ensure that the value of the field doesn't change after the object has been constructed.

    I feel uneasy with this recommendation to use "readonly" to indicate something other than its normal meaning understood by the compiler. I feel that it encourages people to misunderstand the meaning of "readonly", and furthermore to expect it to mean something that the author of the code might not intend. I feel that it precludes using it in places it could be useful - e.g. to show that some relationship between two mutable objects remains unchanged for the lifetime of one of those objects.

    The notion of assuming that readers do not understand the meaning of "readonly" also appears to be in contradiction to other advice from Microsoft, such as FxCop's "Do not initialize unnecessarily" rule, which assumes readers of your code to be experts in the language and should know that (for example) bool fields are automatically initialised to false, and stops you from providing the redundancy that shows "yes, this has been consciously set to false; I didn't just forget to initialize it".

    So, first and foremost, why do Microsoft advise against use of readonly for references to mutable types? I'd also be interested to know: Do you follow this Design Guideline in all your code? What do you expect when you see "readonly" in a piece of code you didn't write?
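    For readers skimming, the behaviour under discussion in one small example (not from the original question): readonly stops the field from being reassigned, but says nothing about the state of the object the field refers to.

        using System.Collections.Generic;

        class Example
        {
            private readonly List<int> _items = new List<int>();

            public void Demo()
            {
                // _items = new List<int>();   // compile error: a readonly field cannot be reassigned
                _items.Add(42);                // allowed: the List object itself is still mutable
            }
        }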

  • Why am I stuck on "Initiating update" when deploying to Google?

    - by michelle
    I've not had any trouble deploying through Eclipse until now. I'm guessing it might have to do with all the stuff I've added today:

        a folder of .pdf and .tex files (in the war/web-inf directory)
        a bit of JDO stuff
        a servlet that reads the files in the directory and indexes them into the JDO

    Is there any way to find out what exactly is the problem? I currently get stuck at "Initiating update" and the stack trace says "ConnectionReset". Any help or input will be appreciated; I really need to deploy this today, thanks! Here's the deploy trace:

        Unable to update: java.net.SocketException: Connection reset
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:315)
            at com.google.appengine.tools.admin.ServerConnection.authenticate(ServerConnection.java:219)
            at com.google.appengine.tools.admin.ServerConnection.send(ServerConnection.java:145)
            at com.google.appengine.tools.admin.ServerConnection.post(ServerConnection.java:81)
            at com.google.appengine.tools.admin.AppVersionUpload.send(AppVersionUpload.java:427)
            at com.google.appengine.tools.admin.AppVersionUpload.beginTransaction(AppVersionUpload.java:241)
            at com.google.appengine.tools.admin.AppVersionUpload.doUpload(AppVersionUpload.java:98)
            at com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:56)
            at com.google.appengine.eclipse.core.proxy.AppEngineBridgeImpl.deploy(AppEngineBridgeImpl.java:271)
            at com.google.appengine.eclipse.core.deploy.DeployProjectJob.runInWorkspace(DeployProjectJob.java:148)
            at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
        Caused by: java.net.SocketException: Connection reset
            at java.net.SocketInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getHeaderFieldKey(Unknown Source)
            at com.google.appengine.tools.util.ClientCookieManager.readCookies(ClientCookieManager.java:123)
            at com.google.appengine.tools.admin.ServerConnection.connect(ServerConnection.java:340)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:314)
            ... 11 more

  • How do I get Google Chrome's root bookmarks folder?

    - by Wayne
    Hi, I'm trying to write a better bookmark manager as a Chrome extension. The problem is there are no simple examples (that I can find) about how to actually use the bookmarks API (available here: http://code.google.com/chrome/extensions/bookmarks.html ). I've looked at the example source (when I downloaded and installed it on my computer it didn't do anything except provide a search box; typing and pressing return failed to do anything) and can't find anything useful. My ultimate goal is to make an extension that allows me to save pages to come back and read later without having to sign up for an account on some service somewhere. So I plan to create either one or two bookmark folders in the root folder/Other Bookmarks - at minimum an "unread pages" folder. In that folder I'll create the unread bookmarks. When the user marks an item as read, it will be removed from that folder. So that's what I'm trying to do... any help will be greatly appreciated, even if it's just pointing me to some good examples.

    UPDATE:

        ...<script>
          function display(tree) {
            document.getElementById("Output").innerHTML = tree;
          }
          function start() {
            chrome.bookmarks.getTree(display);
          }
        </script>
        </head>
        <body>
          <h4 id="Output"></h4>
          <script>
            start();
          </script>
        ...

    That displays [object Object], which suggests (at least to me, with limited JavaScript experience) that an object exists. But how do I access the members of that object? Changing tree to tree.id or any other of what look to be parameters displays "undefined".

  • Postback Removing Styling from Page

    - by Roy
    Hi, currently I've created an ASP.NET page that has a dropdown control with AutoPostBack set to true. I've also added background colors for individual ListItems. Whenever an item is selected in the dropdown control, the styling is completely removed from all of the list items. How can I prevent this from happening? I need the postback to pull data based on the dropdown item that is selected. Here is my code.

    aspx file:

        <asp:DropDownList ID="EmpDropDown" AutoPostBack="True"
            OnSelectedIndexChanged="EmpDropDown_SelectedIndexChanged" runat="server">
        </asp:DropDownList>
        <asp:TextBox ID="MessageTextBox" TextMode="MultiLine" Width="550" Height="100px" runat="server"></asp:TextBox>

    aspx.cs code-behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                GetEmpList();
            }
        }

        protected void EmpDropDown_SelectedIndexChanged(object sender, EventArgs e)
        {
            GetEmpDetails();
        }

        private void GetEmpList()
        {
            SqlDataReader dr = ToolsLayer.GetEmpList();
            int currentIndex = 0;
            while (dr.Read())
            {
                EmpDropDown.Items.Add(new ListItem(dr["Title"].ToString(), dr["EmpKey"].ToString()));
                if (dr["Status"].ToString() == "disabled")
                {
                    EmpDropDown.Items[currentIndex].Attributes.Add("style", "background-color:red;");
                }
                currentIndex++;
            }
            dr.Close();
        }

        private void GetEmpDetails()
        {
            SqlDataReader dr = ToolsLayer.GetEmpDetails(EmpDropDown.SelectedValue);
            while (dr.Read())
            {
                MessageTextBox.Text = dr["Message"].ToString();
            }
            dr.Close();
        }

    Thank You
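    A likely explanation, with a hedged sketch of a fix: attributes added to individual ListItems in code are not round-tripped in ViewState, so styling applied only inside the !IsPostBack branch disappears on the next postback. One way around it is to remember which keys are disabled (here via ViewState; names are illustrative) and re-apply the style on every request. The fragment below is meant to slot into the existing code-behind.

        // Requires: using System.Collections.Generic;
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                GetEmpList();        // populate the list once and record which keys are disabled
            }
            ApplyEmpStyles();        // ListItem attributes are not kept in ViewState, so re-apply them on every request
        }

        private void ApplyEmpStyles()
        {
            var disabledKeys = (List<string>)(ViewState["DisabledEmpKeys"] ?? new List<string>());
            foreach (ListItem item in EmpDropDown.Items)
            {
                if (disabledKeys.Contains(item.Value))
                {
                    item.Attributes.Add("style", "background-color:red;");
                }
            }
        }

        // In GetEmpList, instead of styling the item directly, add each disabled EmpKey to a
        // List<string> and store it once the loop finishes: ViewState["DisabledEmpKeys"] = disabledKeys;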

  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format:

        <BinaryFileString fileType='pdf'>
           <!-- binary file data string here -->
        </BinaryFileString>

    In the component that is called, I will use the above XML string and convert the binary string received within the BinaryFileString tag into a file of the type specified by the fileType='' attribute. The file type could be doc/pdf/xls/rtf. I have the code in the calling application to get the bytes out of the file to be sent. How do I prepare them to be sent with XML tags wrapped around them? I want the application to send out a string to the component, not a byte stream, because there is no way I can tell the file type (pdf/doc/xls) by just looking at the byte stream. Hence the XML string with the fileType attribute. Any ideas on this? The method for extracting the bytes is below:

        FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read);
        using (Stream input = fs)
        {
            byte[] buffer = new byte[8192];
            int bytesRead;
            while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
            {
            }
        }
        return buffer;

    Thanks.
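    The usual way to carry raw bytes inside an XML element is to Base64-encode them; here is a minimal sketch (element and attribute names are taken from the question, everything else is illustrative, and it assumes .NET 3.5+ for System.Xml.Linq):

        using System;
        using System.IO;
        using System.Xml.Linq;

        static class BinaryXml
        {
            // Caller side: read the whole file and wrap it as Base64 text inside the XML element.
            public static string Wrap(string filePath, string fileType)
            {
                byte[] bytes = File.ReadAllBytes(filePath);
                var element = new XElement("BinaryFileString",
                    new XAttribute("fileType", fileType),
                    Convert.ToBase64String(bytes));
                return element.ToString();
            }

            // Component side: decode the text back into bytes and write the file with the right extension.
            public static void Unwrap(string xml, string outputPathWithoutExtension)
            {
                XElement element = XElement.Parse(xml);
                string fileType = (string)element.Attribute("fileType");     // "pdf", "doc", "xls" or "rtf"
                byte[] bytes = Convert.FromBase64String(element.Value);
                File.WriteAllBytes(outputPathWithoutExtension + "." + fileType, bytes);
            }
        }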

  • converting a timestring to a duration

    - by radman
    Hi, at the moment I am trying to read in a formatted time string and create a duration from it. I am currently trying to use the boost date_time time_duration class to read and store the value. boost date_time provides a method time_duration duration_from_string(std::string) that allows a time_duration to be created from a time string, and it accepts strings formatted appropriately ("[-]h[h][:mm][:ss][.fff]"). Now this method works fine if you use a correctly formatted time string. However, if you submit something invalid like "ham_sandwich" or "100" then you will still be returned a time_duration that is not valid. Specifically, if you try to pass it to a standard output stream then an assertion will occur. My question is: does anyone know how to test the validity of the boost time_duration? And failing that, can you suggest another method of reading a time string and getting a duration from it? Note: I have tried the obvious testing methods that time_duration provides; is_not_a_date_time(), is_special() etc., and they don't pick up that there is an issue. Using boost 1.38.0.

  • unable to get values from JSON converted from XML

    - by Dkong
    I have passed some JSON to my page via a webservice. I used JSON.NET to convert XML to JSON. The JSON output looks OK to me, but I am unable to access some of the items in the response, and I am not sure why it is not working. I am using jQuery to make the webservice call and read the response. Even when I try to read the length of the array it says 'undefined'.

        function GetFeed() {
            document.getElementById("marq").innerHTML = '';
            $.ajax({
                type: "POST",
                url: "ticker.asmx/GetStockTicker",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(response) {
                    var obj = (typeof response.d) == 'string' ? eval('(' + response.d + ')') : response.d;
                    for (var i = 0; i < obj.length; i++) {
                        $('#marq').html(obj[i].person);
                    }
                }
            });
        }

    This is a copy and paste of my response as it appeared in Firebug:

        {"d":"{\"?xml\":{\"@version\":\"1.0\",\"@standalone\":\"no\"},\"root\":{\"person\":[{\"@id\":\"1\",\"name\":\"Alan\",\"url\":\"http://www.google.com\"},{\"@id\":\"2\",\"name\":\"Louis\",\"url\":\"http://www.yahoo.com\"}]}}"}

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is in memory: we build up the request, then read the stream into ExpertPDF and write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but often things are getting cached. Take, for example, renaming the only CSS file: if I view the HTML page through a web browser, the page looks broken because the CSS doesn't exist, but if I request that page through the PDF generator it still looks OK, which means the CSS is cached somewhere. Here's the relevant PDF creation code:

        // Create a request
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Method = "GET";

        // Send the request
        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
        if (resp.IsFromCache)
        {
            System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
        }
        else
        {
            System.Web.HttpContext.Current.Trace.Write("not from cache");
        }

        // Read the response
        pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");

    When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to force a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?
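    One thing worth trying (a hedged sketch, not verified against ExpertPDF itself): tell the HttpWebRequest explicitly to bypass any WinINET/proxy cache via System.Net.Cache. Note that this only governs the request that fetches the HTML; if the converter downloads the stylesheet itself while rendering, that fetch would not be affected by this setting.

        using System.Net;
        using System.Net.Cache;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.Method = "GET";

        // Never reuse a cached copy and never store this response in the cache.
        request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);

        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();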

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

        - The GUI tools aren't mature
        - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes
        - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges
        - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with
        - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce
        - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions)
        - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

  • Use XML Layout to contain a simple drawing

    - by user329999
    I would like to create a simple drawing (lines, circles, squares, etc.), but I'm having difficulty figuring out the best way to do this. The drawing would need to be scaled to fit the display, since the size is indirectly specified by the user (like in a CAD application). Also, I don't want to take up the entire display, leaving room for some controls (buttons, etc.). I would pass in the data that describes the drawing. Here's how I imagine it would work: I create an XML layout that contains something that holds the drawing (ImageView, BitmapDrawable, ShapeDrawable, ... ? Not sure exactly what). Then in my Activity I would load the main XML and obtain the resource for the control that holds the drawing. I would then draw to a bitmap. Once the bitmap was completed I would load it into the control that is to hold the drawing. Somewhere along this path it would be scaled to fill the entire area allocated for the drawing in the XML layout. I don't know if my approach is the right way to do this or what classes to use. I read the documentation at http://developer.android.com/guide/topics/graphics/2d-graphics.html, but it's not helping me with an example. The examples I do find leave me with hints, but nothing concrete enough to do what I want, especially when it comes to scaling, using XML, and/or having other controls. Also, there seems to be no good documentation on the design of the 2D drawing system at a more conceptual level, so it is difficult to put what I read into any useful context. Any hints on what classes would be useful and/or a good example or other reading material? Thanks

  • How to trigger events on two different classes together

    - by XBasic3000
    I have two classes in a single unit. Is it possible to trigger both events together, so that when the TMyFirstClass event is fired, the TMySecondClass event also fires? Assuming......

        //{Class 1}-------------------------------------------------------------
        type
          TOnEventTrigger = procedure(Sender: TObject; Value: integer);

          TMyFirstClass = class(TComponent)
          private
            ....
          public
            ....
            property OnEventTrigger: TOnEventTrigger read Fevent write Fevent;
          end;

        procedure TMyFirstClass.FEvnt(Sender: TObject; Value: integer);
        begin
          // here it normally triggers the event
          // if Assigned(OnEventTrigger) then OnEventTrigger(Self, FSomevalue);
          // PostMessage(GetForegroundWindow, WM_USER + 3, 0, 0);
          // this is what I did here to get the result of FSomevalue,
          // but this is not ideal: it works only on the focused window.
        end;

        //{Class 2}-------------------------------------------------------------
        type
          TOnEventTrigger = procedure(Sender: TObject; Value: integer);

          TMySecondClass = class(TObject)
          private
            ....
          public
            ....
            property OnEventTrigger: TOnEventTrigger read Fevent write Fevent;
          end;

        procedure TMySecondClass.FEvnt(Sender: TObject; Value: integer);
        begin
          // I want this one to trigger whenever the one above is fired
          // if Assigned(OnEventTrigger) then OnEventTrigger(Self, FSomevalue);
        end;

  • Using SQL dB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long-running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure columns are approximately [UserId, col1, col2, ..., colN, beingProcessed, lastTimeProcessed], where beingProcessed is a boolean and lastTimeProcessed is a DateTime. The logic in every worker role is:

        public void WorkerRoleMain()
        {
            while (true)
            {
                try
                {
                    dbContext db = new dbContext();

                    // Read
                    foreach (UserProfile user in db.UserProfile
                        .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24)
                                 & u.beingProcessed == false))
                    {
                        user.beingProcessed = true;   // Modify
                        db.SaveChanges();             // Write

                        // Do some long drawn-out processing here
                        // ...

                        user.lastTimeProcessed = DateTime.UtcNow;
                        user.beingProcessed = false;
                        db.SaveChanges();
                    }
                }
                catch (Exception ex)
                {
                    LogException(ex);
                    Sleep(TimeSpan.FromMinutes(5));
                }
            } // while ()
        }

    With multiple workers processing as above (each with their own Entity Framework layer), beingProcessed is in essence being used as a lock for mutual-exclusion purposes. Question: How can I deal with concurrency issues on the beingProcessed "lock" itself under the above load? I think the read-modify-write operation on beingProcessed needs to be atomic, but I'm open to other strategies. Open to other code refinements too.
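    One common pattern, sketched below under the assumption that the workers can issue raw SQL through the same DbContext (EF 4.1+): claim a row with a single conditional UPDATE so the test and the set happen atomically on the server, and only the worker whose UPDATE reports one affected row proceeds. Table, column and method names are taken from the question or are illustrative.

        // Try to claim the row; at most one concurrent worker sees rowsAffected == 1.
        int rowsAffected = db.Database.ExecuteSqlCommand(
            @"UPDATE UserProfile
                 SET beingProcessed = 1
               WHERE UserId = @p0
                 AND beingProcessed = 0
                 AND lastTimeProcessed < DATEADD(HOUR, -24, SYSUTCDATETIME())",
            user.UserId);

        if (rowsAffected == 1)
        {
            DoLongProcessing(user);   // placeholder for the long-running work

            // Release the claim and stamp the completion time.
            db.Database.ExecuteSqlCommand(
                @"UPDATE UserProfile
                     SET beingProcessed = 0, lastTimeProcessed = SYSUTCDATETIME()
                   WHERE UserId = @p0",
                user.UserId);
        }

    An alternative that stays inside Entity Framework is to add a rowversion/timestamp column as a concurrency token and treat a DbUpdateConcurrencyException from SaveChanges as "another worker got there first".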

  • How to not use JavaScript within the elements' event attributes but still load via AJAX

    - by thecoshman
    I am currently loading HTML content via AJAX. I have code attached to the various elements' onclick attributes (and other event attributes). It does work, but I am starting to find that the code is getting rather large and hard to read. I have also read that it is considered bad practice to have the event code 'inline' like this, and that I should really attach it with something like element.onclick = foobar, with foobar defined somewhere else. I understand how, with a static page, it is fairly easy to do this: just have a script tag at the bottom of the page and have it executed once the page is loaded. This can then attach any and all events as you need them. But how can I get this sort of effect when loading content via AJAX? There is also the slight complication that the content loaded can vary depending on what is in the database; sometimes certain sections of HTML, such as tables of results, will not even be displayed, and there will be something else entirely. I can post some samples of code if anybody needs them, but I have no idea what sort of things would help people with this one. I will point out that I am already using jQuery, so if it has some helpful little functions that would be rather sweet.
