Search Results

Search found 1329 results on 54 pages for 'garbage collecting'.

Page 20 of 54

  • New Content: Partner News and Workforce Management Special Report

    - by user462779
    Two new pieces of content are available on Profit Online: Oracle partner Edgewater Ranzal worked with customer High Sierra Energy to integrate Oracle Hyperion Enterprise Performance Management solutions with Oracle E-Business Suite and simplify an increasingly complex financial reporting system. “They needed to eliminate the older processes where 80% of the time was spent on collecting data and only 20% on analyzing the data.” --Bob Sanders, business development manager, Edgewater Ranzal. In a special report about Workforce Management, Profit wraps up a collection of recent content on the subject and looks at Oracle's recent agreement to acquire SelectMinds. “By adding SelectMinds to Oracle’s Talent Management Cloud, Oracle can help customers with a complete talent management solution, enabling streamlined recruiting practices, more quality referrals, faster employee on-boarding, and better performance.” --Thomas Kurian, Executive Vice President, Oracle Development. More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • I've had Tomboy twice delete a single important note. What's going on there?

    - by Mittenchops
    Not sure what's going on here, but here's the best I can describe: I use Tomboy extensively for notes, collecting throw-away passwords, etc. A couple of months ago, a very important note just disappeared. No reference to the title, no deletion, no way of tracking it down---it was just like it had never existed. The same thing just happened today. I think the note in question was actually open when I rebooted, so maybe that's related. Has anyone else encountered something similar? EDIT: This has now happened 3 times. It seems the last note I have open when shutting down (which is usually the most important one) is deleted without a trace.

    Read the article

  • P-Commerce – What The Heck Is That?

    - by Michael Hylton
    We’ve heard of e-commerce, m-commerce (Mobile Commerce), and f-commerce (Facebook Commerce), but what is p-commerce?  It’s not truly a customer touchpoint or channel but rather an emphasis on personalization of the buying experience. Ask yourself: how well do you know your customer?  Are you able to take what you know about them, apply it to their commerce activity with you, and personalize the shopping experience? Much of this is dictated by having a complete 360-degree view of your customer, collecting data from your website, sales interactions, historical commerce purchases, call center activity, how they got to your website, etc., and applying it to their current commerce interaction.  Customers expect to have a similar interaction on your website as they would in your brick-and-mortar store, with the site displaying the products and services that they might be interested in purchasing.

    Read the article

  • Are VM-based languages becoming viable for Graphics since the move to GPU computing?

    - by skiwi
    Perhaps the title is not the clearest, so let me elaborate: I am talking about VM-based languages, by which I mean languages that run on the JVM (Java) and, for example, C#. Also I am talking about 3D graphics, just to be clear. Lately the trend has been that most computing is done on the GPU and not on the CPU, and the long-standing issue with programming games in a VM-based language is that garbage collection may happen at unpredictable times. So let's take a look at which part is responsible for what: showing the graphics: GPU; uploading graphics to the GPU: CPU? (does this need to be done every frame?); calculating physics constraints: GPU; doing the real game logic (determining when to move objects, independent of physics calculations, and processing AI): CPU. Is my list actually correct? And if it is, is Java, for example, becoming more viable? Or is uploading the graphics (vertices) still the most expensive operation? I would like to get more insight into this.
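    (A side note on the garbage-collection concern: the usual mitigation in managed games, on the JVM or in C#, is to avoid per-frame allocation so the collector has little to do during gameplay. Below is a minimal, illustrative Java sketch of one common approach, object pooling; the class and field names are made up for the example.)

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Tiny object pool: reuse vectors instead of allocating new ones every
        // frame, so the garbage collector rarely has work to do mid-game.
        final class Vec3Pool {
            static final class Vec3 { float x, y, z; }

            private final Deque<Vec3> free = new ArrayDeque<>();

            Vec3 obtain() {
                Vec3 v = free.poll();
                return (v != null) ? v : new Vec3();
            }

            void release(Vec3 v) {
                v.x = v.y = v.z = 0f; // reset before returning it to the pool
                free.push(v);
            }
        }

    (The same idea, pre-allocating buffers and pooling temporary objects, is how most JVM and .NET game code keeps collection pauses out of the frame loop.)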

    Read the article

  • Google Scholar Related Question

    - by Art
    I have just submitted a request asking Google Scholar to collect papers from my personal web site: http://cs.uic.edu/~asmirnov/publications.html I was wondering if I did everything right: I submitted the request on the form provided on the Scholar web site, and I published the papers in PDF on my web site. Is there anything else needed for Google to index my site? Other questions are: 1. The first paper link points not just to the paper, but to the whole issue. 2. Are there any tags to be added to my web site? If so, which ones, and how do I add them? 3. What are the exporting options available on the Google Scholar web site and how do they work? Thank you very much for being patient with me and my questions as well.

    Read the article

  • Poll: What kind of computer should a company provide to its developers?

      I have been running an informal poll with almost every developer I come across, as well as with business owners. I have been looking to see what a small development shop, a medium-sized company, and a big corporation consider when providing development computers to their software developers. This has been very intriguing to me. In my career I have found different kinds of software developers: those who always work with one product and one framework, as well as those who keep up a few frameworks and projects. Of course...

    Read the article

  • Open in Explorer view not working sometimes

    - by H(at)Ni
    Hello. As weird as it seems to anyone who has used it before, most of the time Explorer view does not work until certain steps are followed, but in my case it was working and then sometimes randomly not working! After spending hours troubleshooting and collecting logs, network traces, Fiddler traces, etc., I reached the solution from the network trace. Although it seems strange, it was sending a PROPFIND request to the root directory "/", which had actually been deleted. So I came across this important article, which states that you must have a root site collection in your SharePoint web application in order to keep it in a supported state. http://support.microsoft.com/kb/2590564 And that explained and resolved the strange behavior as well. Cheers,

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is present, is also routed through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    As you can see the actual assignment of the Filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

    Using this utility function is now trivially easy. In any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, by default output over a certain size is GZip encoded. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms don’t support GZip out of the box either. The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check for the Content-Encoding HTTP header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS kernel cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable.

    To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It’s all Fun and Games until ASP.NET throws an Error

    Ok, so you’re up and running with GZip, you have your caching squared away, and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page full of binary junk. Lovely, isn’t it? What’s happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close
        ?`I?%&/m?{J?J??t??` … binary output stripped here

    Notice: no Content-Encoding header, and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            …
        }

    And voilà, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.

    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since it is generic – it’ll work regardless of how the content is cleaned up. For example, an error, a Response.Redirect(), or a short error display might change the output without the filter being cleared, and this code handles that too. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out the Response.Filter when an error occurs just as it clears the Response and headers. I can’t see where leaving a filter in place in an error situation would make any sense, but hey – this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic compression is also supported but is a bit trickier to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by.

    Related content: Built-in GZip/Deflate Compression in IIS 7.x; HttpWebRequest and GZip Responses.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7
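    (A footnote on the client-side caveat above, that low-level HTTP clients have to opt in to GZip explicitly: here is a minimal sketch of such a client, shown in Java purely for illustration since WinInet or HttpWebRequest callers face the same two steps, ask for gzip and check the response header before decoding. The URL is a placeholder.)

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        public class GzipAwareClient {
            public static void main(String[] args) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://example.com/").openConnection();
                // Step 1: explicitly advertise that we can handle gzip.
                conn.setRequestProperty("Accept-Encoding", "gzip");

                // Step 2: only wrap the stream if the server actually compressed it.
                InputStream raw = conn.getInputStream();
                InputStream body = "gzip".equalsIgnoreCase(conn.getContentEncoding())
                        ? new GZIPInputStream(raw)
                        : raw;

                try (BufferedReader reader =
                             new BufferedReader(new InputStreamReader(body, "UTF-8"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }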

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for fellow developers. OK, enough talk. Let’s face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database in most cases, really). Oftentimes such an input file is produced by some sort of out-of-control, 3rd-party solution and comes in with garbage characters and/or even malformed/mis-formatted rows. One such example could be this imaginary file: as you can see, several rows have no data and there is an occasional garbage character ("1" in this example, on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let’s outline our course of action to start off: we will use SSIS 2005 to create a DFT; the DFT will use a Flat File Source for our input [bad] flat file; we will use a Conditional Split to process the bad input file; and finally we will dump the resulting data to a new [clean] file. Well, only four steps; let’s see if it is too much work. 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). 2 and 3: I had added a data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What is really key in this approach is to properly set up the Conditional Split Transformation, and specifically its SSIS expression: LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name “No Empty Rows”, but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part is crucial: consuming the output of our Conditional Split. Last step, #4: add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. As the last step we run our package and examine the produced output file. F5: and… it looks great!

    Read the article

  • Error when updating BlackBerry JDE Plug-in for Eclipse (v5.0 Beta 3) ?

    - by Ashraf Bashir
    I tried to update the BlackBerry JDE plug-in for Eclipse from v4.5 to v5.0 Beta 3. I followed the instructions on this page: http://na.blackberry.com/eng/developers/devbetasoftware/updatesite.jsp but unfortunately I got the following error while updating: An error occurred while collecting items to be installed. No repository found containing: net.rim.eide.feature.componentpack5.0.0/org.eclipse.update.feature/5.0.0.14 How can this be solved? Any suggestions?

    Read the article

  • SVNKit: Commit files that were manually deleted from the filesystem (working copy)

    - by Jam
    I cannot solve the problem of collecting CommitItems (the changes to commit), or more accurately, I have no problem with changed and added files, BUT files that I manually deleted from the file system are not seen in the CommitItem list ... and those changes cannot be committed to the SVN server. If I delete a file using the API, then the problem does not exist... but with manual deletion... Has anyone had a similar problem?
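    (For what it's worth, the usual fix is to schedule the missing file for deletion before collecting commit items, so Subversion knows it is gone rather than merely absent from disk. A rough, illustrative sketch with SVNKit follows; it assumes SVNWCClient.doDelete has an overload shaped like the one used here, and the path is made up.)

        import java.io.File;
        import org.tmatesoft.svn.core.wc.SVNClientManager;

        public class ScheduleMissingDelete {
            public static void main(String[] args) throws Exception {
                SVNClientManager client = SVNClientManager.newInstance();
                File missing = new File("workingCopy/removedManually.txt");
                // Mark the file as deleted in the working copy; 'force' covers the
                // case where it is already gone from disk. After this it should
                // appear as a deletion among the items to commit.
                client.getWCClient().doDelete(missing, /* force */ true, /* dryRun */ false);
            }
        }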

    Read the article

  • F#: Recursive collect and filter over N-ary Tree

    - by RodYan
    This is hurting my brain! I want to recurse over a tree structure and collect all instances that match some filter into one list. Here's a sample tree structure: type Tree = | Node of int * Tree list Here's a test sample tree: let test = Node((1, [Node(2, [Node(3,[]); Node(3,[])]); Node(3,[])])) Collecting and filtering over nodes with an int value of 3 should give you output like this: [Node(3,[]);Node(3,[]);Node(3,[])]

    Read the article

  • Short survey on Enterprise JavaBeans usage

    - by Thomas Harris
    Hello, I would be very appreciative if anyone who has any experience with using Enterprise JavaBeans, or who considered, but rejected the use of EJBs would respond to a short survey. The survey consists of eleven (11) questions, and should take five (5) minutes or less to complete. I am collecting this data for a class that I am taking. The URL for the survey is http://cs.createsurvey.com/c/89/9089/survey/11793-UCyRqE.html Thank you very much in advance for your participation! Regards, Tom Harris

    Read the article

  • Collect a list in JS with regex?

    - by acidzombie24
    Basically I want to output a div with all the ids in it. I know how to do that, especially with jQuery, but collecting the ids is the problem, not the list. I failed at the regex, although I know how to use regular expressions in .NET. How do I write this regex properly so I can get a list of ids and display it as I want? http://jsfiddle.net/NgmGf/
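    (The sticking point is the pattern itself, so here is a minimal sketch of one that captures id attributes, written with Java's Pattern/Matcher purely for illustration; the same expression drops straight into JavaScript's match/exec, though in a browser walking the DOM is usually more robust than regexing markup. The sample markup is made up.)

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class CollectIds {
            public static void main(String[] args) {
                String html = "<div id=\"header\"><span id=\"logo\"></span></div>";
                // Capture whatever sits between id=" and the next double quote.
                Pattern idAttr = Pattern.compile("id=\"([^\"]+)\"");
                List<String> ids = new ArrayList<>();
                Matcher m = idAttr.matcher(html);
                while (m.find()) {
                    ids.add(m.group(1));
                }
                System.out.println(ids); // prints [header, logo]
            }
        }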

    Read the article

  • Is there a W3C valid way to disable autocomplete in an HTML form?

    - by matt b
    When using the xhtml1-transitional.dtd doctype, collecting a credit card number with the following HTML <input type="text" id="cardNumber" name="cardNumber" autocomplete='off'/> will flag a warning on the W3C validator: there is no attribute "autocomplete". Is there a W3C / standards way to disable browser auto-complete on sensitive fields in a form?

    Read the article

  • Getting Data From Webpages?

    - by fuzzygoat
    When looking to get data from a web page, what's the recommended method if the page does not provide a structured data feed? Am I right in thinking that it's just a case of doing an NSURLRequest and then hacking what you need out of the responseData (NSData *)? I am not too concerned about the implementation in Xcode; I am more curious about actually collecting the data before I start coding a "hunt & peck" through a list of data. gary

    Read the article

  • Send comma delimited CSV through SFTP?

    - by JM4
    I am collecting registration information on my site and need to figure out how to pass all data stored in the MySQL DB (or just portions of it) as a comma delimited CSV file through an SFTP so our partners can access the information. The pages are built using PHP. I literally have no idea how to do this and am hoping somebody has experience doing so. Thanks ahead of time!
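    (The moving parts are the same in any language: dump the rows as comma-separated lines, then push the file over SFTP. In PHP that typically means a library such as phpseclib or the ssh2 extension; below is a rough sketch of the upload half using the JSch library in Java, purely to illustrate the flow. Host, credentials, and paths are made up.)

        import com.jcraft.jsch.ChannelSftp;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        public class UploadCsv {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("partnerUser", "sftp.example.com", 22);
                session.setPassword("secret");
                session.setConfig("StrictHostKeyChecking", "no"); // demo only; verify host keys in production
                session.connect();

                ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                sftp.connect();
                // registrations.csv would be the file exported from the MySQL table
                sftp.put("registrations.csv", "/incoming/registrations.csv");
                sftp.exit();
                session.disconnect();
            }
        }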

    Read the article

  • How do I aggregate activerecord model data for a specific time period?

    - by gsiener
    I'm collecting data from a system every ~10s (this time difference varies due to communication time with networked devices). I'd like to calculate averages and sums of the stored values for this ActiveRecord model on a daily basis. All records are stored in UTC. What's the correct way to sum and average values for, e.g., the previous day from midnight to midnight EST? I can do this in SQL but don't know the "Rails way" to make this calculation.
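    (Whatever the framework, the trick is to translate the Eastern-time day boundaries into the UTC instants the rows are stored in, and then sum or average over that range; in ActiveRecord that would be a where(created_at: start..end) plus sum/average. A small illustrative sketch of the boundary calculation, written with Java's java.time just to show the idea; variable names are made up.)

        import java.time.Instant;
        import java.time.LocalDate;
        import java.time.ZoneId;

        public class EasternDayWindow {
            public static void main(String[] args) {
                ZoneId eastern = ZoneId.of("America/New_York"); // handles EST/EDT transitions
                LocalDate yesterday = LocalDate.now(eastern).minusDays(1);

                // Midnight-to-midnight in Eastern time, expressed as UTC instants
                // that can be compared directly against UTC-stored timestamps.
                Instant start = yesterday.atStartOfDay(eastern).toInstant();
                Instant end = yesterday.plusDays(1).atStartOfDay(eastern).toInstant();

                System.out.println("created_at >= " + start + " AND created_at < " + end);
            }
        }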

    Read the article

  • Facebook Login API

    - by shyam
    I am developing a mobile application in J2ME that lets users post pictures to Facebook. For this, within the app I am collecting the user's Facebook username and password. How can I use these login credentials to post pictures to Facebook? I came across http://code.google.com/p/facebook-java-api/ but will this allow me to do the same? I have used Nokia community software that does this. Please help.

    Read the article

  • Testing code that uses SoftReference<T>

    - by bmargulies
    To get any code with SoftReference<T> fully tested, one must come up with some way to test the 'yup, it's been nulled' case. One might more or less mock this by using a 'for-test' code path to force the reference to be null, but that won't manage the queue exactly as the GC does. I wonder if anyone out there can share experience in setting up a repeatable, controlled environment in which the GC is, in fact, provoked into collecting and nulling?
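    (One repeatable approach, sketched below, leans on the guarantee that the JVM clears all softly reachable objects before it ever throws OutOfMemoryError: apply memory pressure until the reference is cleared or the error is thrown. Run it with a small heap, e.g. -Xmx32m, so it finishes quickly. Class and variable names are made up for the illustration.)

        import java.lang.ref.SoftReference;
        import java.util.ArrayList;
        import java.util.List;

        public class SoftReferenceClearingTest {
            public static void main(String[] args) {
                SoftReference<byte[]> ref = new SoftReference<>(new byte[1024]);
                List<byte[]> pressure = new ArrayList<>();
                try {
                    while (ref.get() != null) {
                        pressure.add(new byte[1 << 20]); // grab memory 1 MB at a time
                    }
                } catch (OutOfMemoryError expected) {
                    // The JVM clears all soft references before throwing OOME,
                    // so reaching this point still means the reference was nulled.
                }
                pressure.clear(); // drop the ballast so the JVM has room again
                System.out.println("cleared: " + (ref.get() == null));
            }
        }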

    Read the article

  • 2 roles, admin and user. Is using anything other than basic http auth overkill?

    - by juststarting
    I'm building my first website with Rails; it consists of a blog, a few static pages, and a photo gallery. The admin section has namespaced controllers. I also want to create a mailing list collecting contact info (and maybe a Spree store in the future too). Should I just use basic HTTP authentication and check if the user is an admin? Or is a plugin like Authlogic better, where I would define user roles even though there would only be two: admin and user?

    Read the article
