Search Results

Search found 1848 results on 74 pages for 'significant'.

Page 38/74

  • Data Integration/EAI Project Lessons Learned

    - by Greg Harman
    Have you worked on a significant data or application integration project? I'm interested in hearing what worked for you and what didn't, and how that affected the project both during and after implementation (i.e. during ongoing operation, maintenance and expansion). In addition to these lessons learned, please describe the project by including a quick overview of: the data sources and targets (specifics are not necessary, but I'd like to know the general technology categories, e.g. RDBMS table, application accessed via a proprietary socket protocol, web service, reporting tool); the overall architecture of the project as related to data flows; the different human roles in the project (was this all done by one engineer, or did it include analysts with particular expertise?); and any third-party products utilized, commercial or open source.

    Read the article

  • View centric design with Django

    - by wishi_
    Hi! I'm relatively new to Django and I'm designing a website where the main effort is the user experience: optimized CSS, HTML5 and UI work. It's very easy to use Django for data/Model-centric design: just design a couple of Python classes, run ./manage.py syncdb, and there's your Model. But I'm dealing with a significant number of View-centric challenges (different user classes, different tasks, different design problems). The official Django tutorial only cursorily goes through using a "Template". Is there any design-centric guide for Django, or a set of Templates that are ready to use? I don't want to start from scratch with JS, HTML5, Ajax and everything. From the Model-layer perspective, Django is very rapid at delivering a working base system; I wonder whether there's something comparable for the Views.
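
    For reference, the View side can stay almost as small as the Model side. Below is a minimal sketch of a function-based view handing data to a template; the view name, template path and context values are placeholders for illustration, not anything taken from the question:

        # views.py (sketch): keep the view thin and leave all HTML/CSS/JS
        # concerns to templates/dashboard.html and the static files
        from django.shortcuts import render

        def dashboard(request):
            context = {"page_title": "Dashboard"}
            return render(request, "dashboard.html", context)

        # urls.py (sketch): wire the view up to a URL
        from django.urls import path
        from .views import dashboard

        urlpatterns = [path("dashboard/", dashboard)]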

    Read the article

  • Computer science advances in the past 5 years

    - by Doug Stanhope
    I don't have a computer science background and only have a rudimentary knowledge of what CS is all about. However, I wonder: what are the most significant CS advances of the last five years? To give you an idea of how clueless I am, I couldn't name one of these advances. But please do spare me the gory details: I'm not looking for an education in CS or a story about the history of CS. As far as this question is concerned, only the past five years matter! :-)

    Read the article

  • Caching generated QR Code

    - by Michal K
    I use zxing to encode a QR code, store it as a bitmap and then show it in an ImageView. Since the image generation time is significant, I'm planning to move it to a separate thread (AsyncTaskLoader will be fine, I think). The problem is that it's an image, and I know that to avoid memory leaks one should never store a strong reference to it in an Activity. So how would you do it? How do you cache an image so it survives config changes (phone rotation) and generally avoid regenerating it in onCreate()? Just point me in the right direction, please.

    Read the article

  • Is it possible to temporarily disable Python's string interpolation?

    - by dangerouslyfacetious
    I have a Python logger set up using Python's logging module. I want to store the format string I'm using with the logging Formatter object in a configuration file, using the ConfigParser module. The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem is that ConfigParser still tries to interpolate the value and falls over when it reads all the logging-module-specific formatting flags:

        {
            "log_level": logging.DEBUG,
            "log_name": "C:\\Temp\\logfile.log",
            "format_string": "%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
        }

    My question is simple: how can I disable the interpolation here while keeping it elsewhere? My initial reaction was copious use of backslashes to escape the various percent symbols, but that of course permanently breaks the formatting so it won't work even when I need it to. Also, general pointers on good settings-file practice would be nice. This is the first time I've done anything significant with ConfigParser (or logging, for that matter). Thanks in advance, Dominic
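
    A minimal sketch of the two usual ways around ConfigParser's '%' interpolation (Python 3 module name shown; in Python 2 the module is ConfigParser). The settings.ini file, the [logging] section and the format_string option are hypothetical names used only for illustration:

        from configparser import ConfigParser, RawConfigParser

        # Option 1: keep ConfigParser, but switch interpolation off for this
        # one read so "%(asctime)s" comes back untouched.
        config = ConfigParser()
        config.read("settings.ini")
        fmt = config.get("logging", "format_string", raw=True)

        # Option 2: use RawConfigParser (or ConfigParser(interpolation=None)),
        # which never interpolates, so '%' needs no escaping anywhere.
        raw_config = RawConfigParser()
        raw_config.read("settings.ini")
        fmt = raw_config.get("logging", "format_string")

        # If interpolation has to stay on, store the value with doubled percent
        # signs ("%%"); a normal get() turns "%%" back into a single '%'.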

    Read the article

  • IronPython: Visual Studio 2010 or SharpDevelop?

    - by Cruachan
    I'm considering developing a medium-size project for a client in IronPython. It's a pretty straightforward replacement for an existing system I've been supporting for several years, so the specification is quite well defined and understood. This is my first significant IronPython and .NET project, so I'm expecting a bit of a learning curve. I was going to use SharpDevelop, but I can purchase Visual Studio 2010 for a reasonable price; whilst I understand that the IronPython tools for Visual Studio 2008 were not so good, I haven't seen anything about the 2010 update yet. Has anyone used either or both of these in a reasonably sized commercial environment, and do you have any recommendations? (And I'm aware of this question, but this is specifically about VS2010.)

    Read the article

  • WinUSB application or User-Mode Driver as a filter driver for USB Analysis/Sniffer/Trending

    - by Robert
    A question for anyone who has worked extensively with the WinUSB APIs or user-mode USB drivers: does anyone know if the WinUSB API or a user-mode driver can be used as a passive observer of USB connections, capturing notifications of interrupts, control requests, data transfers, etc., without interfering with other applications (such as iTunes) that obviously require concurrent access to the device while my application is monitoring the connection and displaying data about it? Or do you pretty much have to write a kernel-mode filter driver and inject yourself into the USB stack to make that happen? In the past there have been a few credible options (libusb-win32 and usbsnoop, to be specific), though both are built around the old DDK rather than the Windows Driver Foundation and are no longer supported on a regular basis. As a result, I'm hesitant to build anything significant around them.

    Read the article

  • Make process crash on large memory allocation

    - by Pieter
    I'm trying to find a significant memory leak (15 MB at a time, with allocations like this happening in multiple places). I checked the most obvious places and then used AQTime, but I still can't pinpoint it. Now I see two options left:
    1) Use SetProcessWorkingSetSize. I've tried this, but my process happily keeps on running when using up more than 150 MB:

        DWORD MemorySize = 150*1024*1024;
        SetProcessWorkingSetSize( GetCurrentProcess(), MemorySize/2, MemorySize*2 );

    2) Put a breakpoint when allocating more than 1 MB at a time. How should I do this? Overload operator new with an 'if > 1 MB' check inside?

    Read the article

  • ORM solutions (JPA; Hibernate) vs. JDBC

    - by Grasper
    I need to be able to insert/update objects at a consistent rate of at least 8000 objects every 5 seconds in an in-memory HSQL database. I have done some comparative performance testing between Spring/Hibernate/JPA and pure JDBC, and I have found a significant difference in performance with HSQL. With Spring/Hib/JPA, I can insert 3000-4000 of my 1.5 KB objects (with a one-many and a many-many relationship) in 5 seconds, while with direct JDBC calls I can insert 10,000-12,000 of those same objects. I cannot figure out why there is such a huge discrepancy. I have tweaked the Spring/Hib/JPA settings a lot, trying to get close in performance, without luck. I want to use Spring/Hib/JPA for future purposes and expandability, and because the foreign-key relationships (one-many and many-many) are difficult to maintain by hand; but the performance requirements seem to point towards using pure JDBC. Any ideas why there would be such a huge discrepancy?

    Read the article

  • How to skip "Loose Object" popup when running 'git gui'

    - by Michael Donohue
    When I run 'git gui' I get a popup that says "This repository currently has approximately 1500 loose objects." It then suggests compressing the database. I've done this before, and it reduces the loose objects to about 250, but that doesn't suppress the popup. Compressing again doesn't change the number of loose objects. Our current workflow requires significant use of 'rebase', as we are transitioning from Perforce and Perforce is still the canonical SCM. Once Git is the canonical SCM, we will do regular merges and the loose-objects problem should be greatly mitigated. In the meantime, I'd really like to make this 'helpful' popup go away.

    Read the article

  • Multiple Service Address Configs in WCF Silverlight App

    - by CraigS
    My team is building our first significant Silverlight application, using a 3-layered architecture and WCF. We have developed about 10 separate WCF services in the middle layer so far, and this number is only going to grow. Generally, the presentation layer (i.e. the Silverlight app) points at the services hosted on our dev server. However, there are times when I want it to access the services on localhost, i.e. the developer's machine. Is there an easy way to change where the presentation layer looks for the services? Is there some way of easily switching between the two options?

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 bytes or less. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). However, I would rather not have every client running with a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially received data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0, and I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. The server could also send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character, and I would want that data stored somewhere for later appending and processing. Also, I'm using the asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive), if that matters.
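
    The accumulate-and-split logic itself is small. Here it is sketched in Python purely to illustrate the idea; in C# the same role would be played by a per-connection buffer (for example a MemoryStream or StringBuilder) that each EndReceive appends to before splitting on the null delimiter:

        class MessageAssembler:
            """Accumulate raw bytes and hand back complete NUL-terminated messages."""

            def __init__(self):
                self._buffer = b""

            def feed(self, chunk):
                self._buffer += chunk
                # Everything before the last delimiter is a complete message;
                # the tail (possibly empty) is kept for the next receive.
                *messages, self._buffer = self._buffer.split(b"\x00")
                return messages

        assembler = MessageAssembler()
        print(assembler.feed(b"ABC\x00WX"))        # [b'ABC']
        print(assembler.feed(b"YZ\x00123!\x00"))   # [b'WXYZ', b'123!']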

    Read the article

  • How to deal with color loss on GDI+ Image Resize?

    - by user125775
    Hello all, I am resizing images with C#/GDI+ using the following routine:

        bmpOut = new Bitmap(lnNewWidth, lnNewHeight);
        Graphics g = Graphics.FromImage(bmpOut);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        g.FillRectangle(Brushes.White, 0, 0, lnNewWidth, lnNewHeight);
        g.DrawImage(loBMP, 0, 0, lnNewWidth, lnNewHeight);

    and encoding it with the highest quality:

        System.Drawing.Imaging.Encoder qualityEncoder = System.Drawing.Imaging.Encoder.Quality;
        EncoderParameter myEncoderParameter = new EncoderParameter(qualityEncoder, 100L);

    However, the images I get back have a significant loss of color (I am using JPG images only). The quality is perfect, but the color is washed out. Do you have any idea what is going on? Thanks a lot in advance.

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code that I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe), and the second executable first checks whether such a file exists; if it does not exist it checks again, and when it finds the file it reads its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable succeeds on the first try. Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than a pipe? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
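
    To make the comparison concrete, here are the two patterns side by side, sketched in Python purely for illustration (the legacy code itself is not Python, and the file/command names here are made up):

        import os
        import subprocess
        import time

        # File-based handshake: the reader has to poll for the file, which burns
        # CPU or adds latency, and it can race a writer that has created the
        # file but not yet finished writing it.
        def read_via_file(path):
            while not os.path.exists(path):
                time.sleep(0.1)
            with open(path, "rb") as f:
                return f.read()

        # Pipe-based: the OS blocks the reader until data arrives and signals
        # completion with EOF; no polling, no temp file to name or clean up.
        def read_via_pipe(cmd):
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
            data = proc.stdout.read()
            proc.wait()
            return data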

    Read the article

  • read subprocess stdout line by line

    - by Caspin
    My Python script uses subprocess to call a Linux utility that is very noisy. I want to store all of the output in a log file but only show some of it to the user. I thought the following would work, but the output does not show up in my application until the utility has produced a significant amount of output.

        # fake_utility.py, just generates lots of output over time
        import time
        i = 0
        while True:
            print hex(i)*512
            i += 1
            time.sleep(0.5)

        # filters output
        import subprocess
        proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE)
        for line in proc.stdout:
            # the real code does filtering here
            print "test:", line.rstrip()

    The behavior I really want is for the filter script to print each line as it is received from the subprocess, sorta like what tee does but with Python code. What am I missing? Is this even possible?
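
    For what it's worth, the workaround usually suggested for the Python 2 file-iterator's read-ahead buffering is to call readline() explicitly through iter(). A minimal sketch, written here for Python 3 (hence the b'' sentinel and print() function), assuming the same fake_utility.py:

        import subprocess

        proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE)
        # iter() with an empty-bytes sentinel forces one readline() call per loop
        # instead of letting the file iterator fill its internal read-ahead buffer.
        for line in iter(proc.stdout.readline, b''):
            print("test:", line.rstrip())
        proc.stdout.close()
        proc.wait()

    Note that the child process can still buffer its own stdout when it is not attached to a terminal; running it with 'python -u fake_utility.py' keeps its output unbuffered.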

    Read the article

  • Cost to GC of using weak references in C#?

    - by Scott Bilas
    In another question, Stephen C says: A second concern is that there are runtime overheads with using weak references. The obvious costs are those of creating weak references and calling get on them. A less obvious cost is that significant extra work needs to be done each time the GC runs. So what exactly is the cost to the GC of a weak ref? What extra work does it need to do, and how big of a deal is it? I can make some educated guesses, but am interested in the actual mechanics.

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assuming I do the following: (1) use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll"; (2) create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll". Will there be any differences in the performance of Foo1.dll and Foo2.dll when using their functionality from an external project? If so, how significant is this performance difference, and is it a once-off (on load?) or ongoing difference? Are there any other pros or cons with either approach?

    Read the article

  • Is it possible to use DLR in a .NET 3.5 website project?

    - by Aplato
    I'm trying to evaluate an expression stored in a database, i.e. "if (Q1 ==2) {result = 3.1;} elseif (Q1 ==3){result=4.1;} else result = 5.9;". Rather than parsing it myself, I'm trying to use the DLR. I'm using version 0.92 from the CodePlex repository and my solution is a .NET 3.5 website, and I'm getting conflicts between the System.Core and Microsoft.Scripting.ExtensionAttribute DLLs. Error = { Description: "'ExtensionAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'.", File: "InternalXmlHelper.vb" } At this time I cannot upgrade to .NET 4.0, and I make significant use of the .NET 3.5 features (so downgrading is not an option either). Any help greatly appreciated.

    Read the article

  • iphone Memory gets freed in debug mode but not in release mode

    - by gdr
    I have been testing my iPhone debug build on both my device and the simulator with Activity Monitor, Leaks, and Object Allocations. The code is pretty well optimized, so I decided to test the release build. I went into the project menu, set the build configuration to Release, added the necessary header paths my app uses to the header search paths, and ran the release build on the device with the above-mentioned instruments. What I have noticed is that memory that was freed with the debug build does not get freed with the release build. There is one place in my app where I remove a scroll view with some images, which frees up a significant amount of memory with the debug build, but no memory is freed there with the release build. Does anyone have any ideas about where I should start looking? Did I set up my release build wrong?

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and I'm looking to make things more efficient. My approach has been to split() the file into lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient way of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume the fields will contain only alphanumeric ASCII data (no special characters or other tricks). And I don't want to get into parallel processing, although that might work effectively.

    Read the article

  • Running out of memory but not seeing excessive object allocation in Instruments

    - by Scotty Allen
    I have an iPad app that's crashing due to low memory. However, Instruments doesn't show any significant amount of memory allocated using ObjectAlloc; it stays under 1 MB for the lifetime of the application. Leaks shows less than 1 kB leaked over the course of the run. Memory Monitor shows the free memory on the device dropping significantly with use, eventually to the point that it's out of memory. Here's a screenshot from Instruments: I'm totally stumped. As far as I can tell, this basically says that, as far as my app is concerned, I'm never using more than about 750 kB, but the device is still running out of physical memory, which is causing my app to crash/force exit. I'm new to debugging memory issues with Xcode. Am I measuring this wrong? Is there another way to see where this memory is going?

    Read the article

  • Events raised by BackgroundWorker not executed on expected thread

    - by Topdown
    A WinForms dialog uses a BackgroundWorker to perform some asynchronous operations, with significant success. On occasion, the async process run by the background worker needs to raise an event back to the WinForms app for a user response (a message asking the user whether they wish to cancel), with the response captured in a CancelEventArgs-style argument on the event. Since this is threaded, I would have expected the worker's RaiseEvent to fire and the worker to carry on, hence requiring me to pause the worker until the response is received. Instead, however, the worker waits for the code executed by the raised event to complete. It seems that the method I'm calling via the event is actually running on the thread used by the BackgroundWorker, and I'm surprised, since I expected to see it on the main thread, which is where the main form is running. Also surprisingly, no cross-thread exceptions are thrown. Can somebody please explain why this is not as I expect?

    Read the article

  • Is it better to store serialized data or raw HTML in MySQL?

    - by Yegor
    I de-normalized my database, since the application was crawling otherwise, and I'm storing the list of categories for each item in the DB as raw HTML and simply echoing it out in my design. Each category is actually a link wrapped in an <a> tag. Naturally, this is a bit of a pain, especially if I want to change the look of how the category links are displayed, since I have to update all the old cached entries. What if I were to store this data as a serialized array instead, simply unserialize it, and then apply the formatting to it in PHP? Would there be a significant performance decrease over simply echoing out the raw HTML?

    Read the article

  • Delete all records that have no foreign key constraints

    - by Rodney Burton
    I have a SQL 2005 table with millions of rows in it that is being hit by users all day and night. This table is referenced by 20 or so other tables with foreign key constraints. What I need to do on a regular basis is delete all records from this table where the "Active" field is set to false AND no records in any of the child tables reference the parent record. What is the most efficient way of doing this, short of trying to delete each row one at a time and letting SQL errors occur on the ones that violate constraints? Also, disabling the constraints is not an option, and I cannot hold locks on the parent table for any significant amount of time.

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, but I have only used the Windows Forms DataGridView and not the WPF DataGrid (which was only included in .NET 4, or could be added to .NET 3.5 from CodePlex, I understand). I am about to develop an app that uses one of these controls heavily for large amounts of data, and I have read that performance is an issue with the WPF DataGrid, so I may stick with the Windows Forms DataGridView. Is this the case? I do not want to use a third-party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF I would prefer to use .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better. Also, I want to use ADO with DataTables, which I feel is better suited to Windows Forms.

    Read the article
