Search Results

Search found 622 results on 25 pages for 'cleaning'.

  • Odd Infragistics UltraComboEditor data binding non-bug

    - by Richard Dunlap
    Within an Infragistics 8.2 UltraComboEditor, we had the following properties set via C#:

        DataSource = dataSource;
        ValueMember = "Measure";
        DisplayMember = "Name";
        DataBindings.Add("Value", repository, "Measure");
        DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged;

    where dataSource was an array of objects, each with a property Measure, and repository was an object with a property Measure. (Those strings are actually constructor parameters -- just using explicit strings to simplify the example.)

    In the course of some refactoring, the name of the property on the objects in the array was changed to BaseEnum (the objects are actually wrapped enumerations, for the curious), but the name of ValueMember above was not changed. And yet, the combo box binding continued to work through initial testing, beta testing, and even after release... until two customers emailed in noting that the combo box was no longer changing the underlying parameter. We were able to dig out the problem by careful study of the source code repository... despite being in the awkward position of not being able to replicate the buggy behavior internally.

    Two part question:

    1. What's happening under the hood that allowed the binding to continue to function, and/or what might be unique about those two users that caused the binding to (correctly) fail? (O/S version isn't alone the answer, and we get the unexpectedly functioning binding on machines that have never had a version of the software before, so we're not looking at rogue binaries.)
    2. Are there tools that might have been able to warn us about the misbind, even if something was cleaning up behind?
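
    For readers unfamiliar with the binding pattern in question, here is the equivalent setup against a stock WinForms ComboBox rather than the Infragistics editor. This is a sketch with made-up types; note that on a standard ComboBox the bindable property is SelectedValue, whereas "Value" above is specific to the UltraComboEditor:

        using System.Windows.Forms;

        // Hypothetical item and repository types standing in for the ones in the question.
        public class MeasureItem
        {
            public int Measure { get; set; }
            public string Name { get; set; }
        }

        public class Repository
        {
            public int Measure { get; set; }
        }

        public class ExampleForm : Form
        {
            public ExampleForm(MeasureItem[] dataSource, Repository repository)
            {
                var combo = new ComboBox();
                combo.DataSource = dataSource;
                combo.ValueMember = "Measure";      // must match a property on the items in DataSource
                combo.DisplayMember = "Name";
                combo.DataBindings.Add("SelectedValue", repository, "Measure");
                combo.DataBindings["SelectedValue"].DataSourceUpdateMode =
                    DataSourceUpdateMode.OnPropertyChanged;
                Controls.Add(combo);
            }
        }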

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.
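
    For readers unfamiliar with the asynchronous handler pattern being described, a minimal skeleton of such a handler is sketched below. The names are hypothetical and this is not the asker's code; the question concerns string growth observed while a handler of this general shape services many concurrent requests:

        using System;
        using System.Threading;
        using System.Web;

        // Minimal IHttpAsyncHandler skeleton (illustration only).
        public class SampleAsyncHandler : IHttpAsyncHandler
        {
            public bool IsReusable { get { return true; } }

            // The synchronous entry point is never used for an async handler.
            public void ProcessRequest(HttpContext context)
            {
                throw new NotSupportedException();
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                var result = new SimpleAsyncResult(cb, extraData);
                // Hand the work to a worker thread and return immediately.
                ThreadPool.QueueUserWorkItem(delegate
                {
                    context.Response.Write("done");
                    result.Complete();
                });
                return result;
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                // Nothing to clean up in this sketch.
            }
        }

        // Bare-bones IAsyncResult used by the sketch above.
        internal class SimpleAsyncResult : IAsyncResult
        {
            private readonly AsyncCallback _callback;
            private readonly object _state;
            private readonly ManualResetEvent _done = new ManualResetEvent(false);
            private volatile bool _completed;

            public SimpleAsyncResult(AsyncCallback callback, object state)
            {
                _callback = callback;
                _state = state;
            }

            public void Complete()
            {
                _completed = true;
                _done.Set();
                if (_callback != null) _callback(this);
            }

            public object AsyncState { get { return _state; } }
            public WaitHandle AsyncWaitHandle { get { return _done; } }
            public bool CompletedSynchronously { get { return false; } }
            public bool IsCompleted { get { return _completed; } }
        }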

  • JACOB (Java/COM/ActiveX) - How to troubleshoot event handling?

    - by Youval Bronicki
    I'm trying to use JACOB to interact with a COM object. I was able to invoke an initialization method on the object (and to get its properties), but am not getting any events back. The code is quoted below. I have a sample HTML+Javascript page (running in IE) that successfully receives events from the same object.

    I'm considering the following options, but would appreciate any concrete troubleshooting ideas:

    1. Send my Java program to the team who developed the COM object, and have them look for anything suspicious on their side (does the object have a way of knowing whether there's a client listening to its events, and whether they were successfully delivered?)
    2. Get into the native parts of JACOB and try to debug on that side. That's a little scary given that my C++ is rusty and that I've never programmed for Windows.

        public static void main(String[] args) {
            try {
                ActiveXComponent c = new ActiveXComponent(
                        "CLSID:{********-****-****-****-************}"); // My object's clsid
                if (c != null) {
                    System.out.println("Version:" + c.getProperty("Version"));
                    InvocationProxy proxy = new InvocationProxy() {
                        @Override
                        public Variant invoke(String methodName, Variant[] targetParameters) {
                            System.out.println("*** Event ***: " + methodName);
                            return null;
                        }
                    };
                    DispatchEvents de = new DispatchEvents((Dispatch) c.getObject(), proxy);
                    c.invoke("Init", new Variant[] {
                            new Variant(10), // param1
                            new Variant(2),  // param2
                    });
                    System.out.println("Waiting for events ...");
                    Thread.sleep(60000); // 60 seconds is long enough
                    System.out.println("Cleaning up ...");
                    c.safeRelease();
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                ComThread.Release();
            }
        }

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior?

    Original code:

        for (int i = 0; i < ct; ++i) {
            // do some stuff...
            int iFreq = getFreq(i);
            double dFreq = iFreq;
            if (iFreq != 0) {
                // do some stuff with iFreq...
                // do some calculations with dFreq...
            }
        }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or at most a negligible improvement. However, performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0.

    Changed code:

        for (int i = 0; i < ct; ++i) {
            // do some stuff...
            int iFreq = getFreq(i);
            if (iFreq != 0) {
                // do some stuff with iFreq...
                double dFreq = iFreq;
                // do some stuff with dFreq...
            }
        }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

  • No test coverage files generated for Unit Test bundle in Xcode

    - by John Gallagher
    The Problem

    I've got a Cocoa project on the desktop and I'm using Xcode 3.2.1 on Snow Leopard 10.6.2. I want to generate code coverage files for my Unit Test target in Xcode.

    What I've Tried

    As articles like this one suggest, I've adjusted the build settings to:

    - “Generate Test Coverage Files” checked
    - “Instrument Program Flow” checked
    - “-lgcov” added to “Other Linker Flags”

    I've also set the Run Script section of the test target to the following:

        # Run the unit tests in this test bundle.
        "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"

        # Run gcov on the framework getting tested
        if [ "${CONFIGURATION}" = 'Coverage' ]; then
            FRAMEWORK_NAME=LapsusInterpretationEngine
            FRAMEWORK_OBJ_DIR=${OBJROOT}/${FRAMEWORK_NAME}.build/${CONFIGURATION}/EngineTests.build/Objects-normal/${NATIVE_ARCH}
            mkdir -p coverage
            pushd coverage
            find ${OBJROOT} -name *.gcda -exec gcov -o ${FRAMEWORK_OBJ_DIR} {} \;
            popd
        fi

    Since my framework name is LapsusInterpretationEngine but my target is named EngineTests, I put this directly into FRAMEWORK_OBJ_DIR, but this didn't seem to help. I've tried cleaning before building. I've made sure all the above build settings apply to both the Unit Test target and the Application target.

    What I Get

    No .gcda or .gcno files anywhere in the build directory I'm using. I point CoverStory to the Objects-normal directory in my builds folder and it complains that there's nothing there for it to read. I must be doing something really obvious wrong. Anyone any ideas? I have also tried using ${FRAMEWORK_NAME}.build in place of EngineTests.build in the path, and this gives the same results.

  • Exclude debug javascript code during minification

    - by Tauren
    I'm looking into different ways to minify my javascript code, including the regular JSMin, Packer, and YUI solutions. I'm really interested in the new Google Closure Compiler, as it looks exceptionally powerful.

    I noticed that Dean Edwards' Packer has a feature to exclude lines of code that start with three semicolons. This is handy for excluding debug code. For instance:

        ;;; console.log("Starting process");

    I'm spending some time cleaning up my codebase and would like to add hints like this to easily exclude debug code. In preparation for this, I'd like to figure out if this is the best solution, or if there are other techniques. Because I haven't chosen how to minify yet, I'd like to clean the code in a way that is compatible with whatever minifier I end up going with. So my questions are these:

    1. Is using the semicolons a standard technique, or are there other ways to do it?
    2. Is Packer the only solution that provides this feature? Can the other solutions be adapted to work this way as well, or do they have alternative ways of accomplishing this?
    3. I will probably start using Closure Compiler eventually. Is there anything I should do now that would prepare for it?
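
    Whichever minifier is chosen, the ";;;" convention can also be honoured by a small build-time pre-pass of one's own. A rough sketch in C# follows (the file names are made up, and this is not a feature of any of the minifiers mentioned above):

        using System;
        using System.IO;
        using System.Linq;

        // Strip lines whose first non-whitespace characters are ";;;" before
        // handing the file to whatever minifier is used (sketch only).
        class DebugLineStripper
        {
            static void Main()
            {
                var lines = File.ReadAllLines("app.debug.js")
                                .Where(line => !line.TrimStart().StartsWith(";;;"));
                File.WriteAllLines("app.js", lines.ToArray());
                Console.WriteLine("Wrote app.js with debug lines removed.");
            }
        }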

  • Is There a Time at which to ignore IDisposable.Dispose?

    - by Mystagogue
    Certainly we should call Dispose() on IDisposable objects as soon as we don't need them (which is often merely the scope of a "using" statement). If we don't take that precaution then bad things, from subtle to show-stopping, might happen.

    But what about "the last moment" before process termination? If your IDisposables have not been explicitly disposed by that point in time, isn't it true that it no longer matters? I ask because unmanaged resources, beneath the CLR, are represented by kernel objects - and win32 process termination will free all unmanaged resources / kernel objects anyway. Said differently, no resources will remain "leaked" after the process terminates (regardless of whether Dispose() was called on lingering IDisposables).

    Can anyone think of a case where process termination would still leave a leaked resource, simply because Dispose() was not explicitly called on one or more IDisposables? Please do not misunderstand this question: I am not trying to justify ignoring IDisposables. The question is just technical-theoretical.

    EDIT: And what about Mono running on Linux? Is process termination there just as "reliable" at cleaning up unmanaged "leaks"?
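
    For concreteness, the two situations being contrasted look roughly like this (a sketch using an arbitrary file resource; the file names are invented):

        using System;
        using System.IO;

        class DisposeAtExitSketch
        {
            static void Main()
            {
                // Normal practice: deterministic cleanup as soon as the
                // resource is no longer needed.
                using (var log = new StreamWriter("run.log"))
                {
                    log.WriteLine("work done");
                } // Dispose() runs here, releasing the underlying file handle.

                // The scenario in the question: a disposable created just
                // before termination and never explicitly disposed. The OS
                // reclaims the file handle when the process exits, though any
                // buffered text that was never flushed may not reach the disk.
                var lingering = new StreamWriter("exit.log");
                lingering.WriteLine("about to exit");
                Environment.Exit(0);
            }
        }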

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper 5, some of my unit tests started failing. More specifically, this applies to all unit tests that use NHibernate with SQLite. The problem seems to be related to SQLite somehow. The unit tests that do not involve NHibernate and SQLite are still running fine. The exception is as follows:

        NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4.
        ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
        ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.
        TearDown : System.NullReferenceException : Object reference not set to an instance of an object.

    The NullReferenceException on TearDown comes from cleaning up NHibernate objects that weren't successfully created, but the underlying problem seems to be related to SQLite somehow. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe), all tests run fine. Can it be related to some mixing of 64-bit and 32-bit dlls? It still runs fine through VS2008 + ReSharper 4.5. Note that the target framework of my projects is still .NET 3.5. Anyone seen this problem before?

  • Managing resource closure in a servlet container

    - by Steven Schlansker
    I'm using Tomcat as a servlet container, and have many WARs deployed. Many of the WARs share common base classes, which are replicated in each context due to the different classloaders, etc. How can I ensure resource cleanup on context destruction, without hooking each and every web.xml file to add context listeners? Ideally, I'd like something along the lines of:

        class MyResourceHolder implements SomeListenerInterface {
            private SomeResource resource;
            {
                SomeContextThingie.registerDestructionListener(this);
            }
            public void onDestroy() {
                resource.close();
            }
        }

    I could put something in each web.xml, but since there are potentially many WARs and only the ones that actually initialize the resource need to clean it up, it seems more natural to register for cleanup when the resource is initialized rather than duplicating a lot of XML configuration and then maybe cleaning up. (In this particular case, I'm initiating an orderly shutdown of a SQL connection pool. But I see this being useful in many other situations as well...) I'm sure there's some blisteringly obvious solution out there, but my Google-fu is failing me right now. Thanks!

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately, the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the subversion repository). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in, I simply don't stage or commit them, but having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is if there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?

  • clean html a string by element id using php

    - by user327140
    Hi, as you can see from the subject, I am looking for a tool to clean up an HTML string in PHP using an HTML id attribute. For example, given the following PHP string, I wish to clean the HTML by erasing the element with id "block11":

        $test = '
        <div id="block1">
            <div id="block11">Hello1 <span>more html here...</span></div>
            <div id="block12">Hello2 <span>more html here...</span></div>
        </div>
        <div id="block2"></div>
        ';

    It should become:

        $test = '
        <div id="block1">
            <div id="block12">Hello2 <span>more html here...</span></div>
        </div>
        <div id="block2"></div>
        ';

    I already tried the tool from htmlpurifier.org and can't get the desirable result. The only things I achieved were removing elements by tag, erasing ids, and erasing classes. Is there any simple way to achieve this using HTML Purifier or something else? Thanks in advance.

  • In .NET, Why Can I Access Private Members of a Class Instance within the Class?

    - by AMissico
    While cleaning some code today written by someone else, I changed the access modifier from Public to Private on a class variable/member/field. I expected a long list of compiler errors that I use to "refactor/rework/review" the code that used this variable. Imagine my surprise when I didn't get any errors. After reviewing, it turns out that one instance of a class can access the private members of another instance of the same class, as long as the accessing code is declared within that class. Totally unexpected.

    Is this normal? I've been coding in .NET since the beginning and never ran into this issue, nor read about it. I may have stumbled onto it before, but only "vaguely noticed" and moved on. Can anyone explain this behavior to me? I would like to know the "why" I can do this. Please explain, don't just tell me the rule. Am I doing something wrong? I found this behavior in both C# and VB.NET. The code seems to take advantage of the ability to access private variables.

    Sincerely, Totally Confused

        Class Jack
            Private _int As Integer
        End Class

        Class Foo
            Public Property Value() As Integer
                Get
                    Return _int
                End Get
                Set(ByVal value As Integer)
                    _int = value * 2
                End Set
            End Property

            Private _int As Integer
            Private _foo As Foo
            Private _jack As Jack
            Private _fred As Fred

            Public Sub SetPrivate()
                _foo = New Foo
                _foo.Value = 4 'what you would expect to do because _int is private
                _foo._int = 3 'TOTALLY UNEXPECTED

                _jack = New Jack
                '_jack._int = 3 'expected compile error

                _fred = New Fred
                '_fred._int = 3 'expected compile error
            End Sub

            Private Class Fred
                Private _int As Integer
            End Class
        End Class
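
    The same observation in C#, as a minimal sketch with invented names: accessibility in .NET is enforced per type, not per instance, so code declared inside a class may touch the private members of any other instance of that same class.

        // 'private' restricts access to code inside the declaring type,
        // not to the particular instance.
        public class Account
        {
            private decimal _balance;

            public Account(decimal balance)
            {
                _balance = balance;
            }

            // This method reads and writes another instance's private field
            // directly -- legal, because this code lives inside Account itself.
            public void TransferTo(Account other, decimal amount)
            {
                _balance -= amount;
                other._balance += amount;   // compiles fine
            }
        }

        public class Outsider
        {
            public void Attempt(Account a)
            {
                // a._balance = 0;          // compile error: inaccessible outside Account
            }
        }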

  • Designing secure consumer blackberry application

    - by Kiran Kuppa
    I am evaluating a requirement for a consumer BlackBerry application that places a high premium on the security of the user's data (it seems to be for an insurance company). Here are my ideas on how I could go about it; I am sure this will also be useful for others who are looking for similar stuff:

    - Force the user to use a device password. (I am guessing that this would be possible - though I haven't checked it yet.) The application can request notifications when the device is about to be locked and just after it has been unlocked. Encryption of application-specific data can be managed at those times.
    - Application data would be encrypted with the user's password. The user's credentials would be encrypted with the device password.
    - Remote backup of the data could be done over HTTPS (any better ideas are appreciated).

    Questions:

    1. What if the user forgets his device password?
    2. If the user forgets his application password, what is the best and secure way to reset the password?
    3. If the user loses the phone, remote backup must be done and the application data must be cleaned up.

    I have some ideas on how to achieve (3) and shall share them. There must be an off-line verification of the user's identity, and the administrator must provide a channel using which the user must be able to send a command to the device to perform the wiping of application data. The idea is that the user is ALWAYS in control of his data. Without the user's consent, even the admin must not be able to do activities such as cleaning up the data. In the above scheme of things, it appears as if the user's password need not be sent over the air to the server. Am I correct?

    Thanks,
    --Kiran Kumar

  • Publish failed using Ant publisher (Eclipse/datanucleus).

    - by aronp
    Dear All, I am being driven mad by the following (apparently hard) error from Eclipse:

        Publish failed using Ant publisher
        Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.

    I have seen comments on similar errors where refreshing Eclipse's view of the project helps, but it is not helping me. I have tried cleaning the project, removing it from the webserver, and deleting war files, but can't seem to clear it. I have reset my TMPDIR variable so that it uses a directory on the same filesystem, as that appeared to be another possible cause. The error occurs on classes which have been enhanced by DataNucleus; I have auto-enhance on for the project. The other references to this problem indicate that it is due to Eclipse's view of the project being out of step with the filesystem, and I am guessing that this has something to do with the DataNucleus enhancement. Any ideas? Thanks. I am using Eclipse 3.5.2 with the latest DataNucleus plugins.

    Stack trace:

        org.eclipse.core.runtime.CoreException: Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.
            at org.eclipse.jst.server.generic.core.internal.publishers.AbstractModuleAssembler.copyModule(AbstractModuleAssembler.java:172)
            at org.eclipse.jst.server.generic.core.internal.publishers.WarModuleAssembler.assemble(WarModuleAssembler.java:31)
            at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.assembleModule(AntPublisher.java:167)
            at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.publish(AntPublisher.java:128)
            at org.eclipse.jst.server.generic.core.internal.GenericServerBehaviour.publishModule(GenericServerBehaviour.java:82)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModule(ServerBehaviourDelegate.java:949)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModules(ServerBehaviourDelegate.java:1039)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:872)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects no larger than 1MB. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that."

    What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached, I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All those seem very low level and I'm wondering if an obvious solution exists...

  • Eclipse won't believe I have Maven 2.2.1

    - by Andrew Clegg
    I have a project (built from an AppFuse template) that requires Maven 2.2.1, so I upgraded to this (from 2.1.0) and set my path and my M2_HOME and MAVEN_HOME environment variables. Then I ran mvn eclipse:eclipse and imported the project into Eclipse (Galileo). However, in the problems list for the project (and at the top of the pom.xml GUI editor) it says:

        Unable to build project '/export/people/clegg/data/GanymedeWorkspace/funcserve/pom.xml; it requires Maven version 2.2.1

    (Please ignore 'GanymedeWorkspace', I really am using Galileo!) This persists whether I set Eclipse to use its embedded Maven implementation or the external 2.2.1 installation in the Preferences - Maven - Installations dialog. I've tried closing and reopening the project, reindexing the repository, cleaning the project, restarting the IDE, logging out and back in again, everything I can think of! But Eclipse still won't believe I have Maven 2.2.1. I just did a plugin update so I have the latest version of Maven Integration for Eclipse -- 0.9.8.200905041414.

    Does anyone know how to convince Eclipse I really do have the right version of Maven? It's like it's recorded the previous version somewhere else and won't pay any attention to my changes :-(

    Many thanks!
    Andrew.

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are:

    - The only way to evaluate an Ev monad computation is in the IO monad via query or update.
    - There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test.
    - There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale.
    - Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev). This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions.

    The only options I can see are:

    1. Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time.
    2. Write the methods to have most of the code run in a MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary.

    Is there a better solution that I'm overlooking?

  • Modify post data with a custom MVC extension?

    - by Jaxidian
    So I'm looking into writing some custom MVC extensions, and the first one I'm attempting to tackle is a FormattedTextBox to handle things such as currency, dates, and times. I have the rendering of it working perfectly: formatting it, working with strong types, and everything all golden. However, the problem I'm now running into is cleaning up the formatted stuff when the page posts the data back.

    Take, for example, a currency format (let's use USD for these examples). When an object has a property as a decimal, the value would be 79.95. Your edit view would be something like:

        <%= Html.FormattedTextBox(model => Model.Person.HourlyWage, "{0:C}") %>

    This is all well and good for the GET request, but upon POST, the value is going to be $79.95, which, when you assign it to that decimal, gets unhappy very quickly and ends up shoving a 0 in there.

    So my question is, how do I get code working somewhere to work with that value before the MVC Framework goes and shoves it back into my ViewModel? I'd much rather this be done server-side than client-side. Thanks!!
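
    One common place to intercept the posted value server-side is a custom model binder. A rough sketch follows, assuming ASP.NET MVC 2-era System.Web.Mvc types; the extension and property names above are the asker's, everything in this sketch is made up and not part of that extension:

        using System.Globalization;
        using System.Web.Mvc;

        // Strips currency formatting ("$79.95") from a posted value before it
        // reaches a decimal property on the view model.
        public class CurrencyModelBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                ValueProviderResult result =
                    bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
                if (result == null || string.IsNullOrEmpty(result.AttemptedValue))
                    return null;

                decimal value;
                if (decimal.TryParse(result.AttemptedValue,
                                     NumberStyles.Currency,
                                     CultureInfo.CurrentCulture,
                                     out value))
                    return value;

                bindingContext.ModelState.AddModelError(bindingContext.ModelName, "Invalid currency value.");
                return null;
            }
        }

        // Registration, e.g. in Application_Start:
        //     ModelBinders.Binders.Add(typeof(decimal), new CurrencyModelBinder());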

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory, ~50-100MB. There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable: performance or maintainability.

    The data being collected consists of many small items, ~30,000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity, a "blob".

    Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove handlers.

    The theory behind the second approach is that large, longer-lived objects take longer to clean up in the GC, so by cutting the large object graph into smaller, bite-sized morsels the garbage collector will process them faster, thus a performance gain.

    So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of pre-optimization, but I do not know enough about the GC to know what helps or hinders it.
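
    For concreteness, the shape of Approach #2 being debated might look something like this (a sketch in C# with invented member names, not the team's actual code):

        using System;

        // Each node in the blob unhooks its event handlers and drops its
        // references when disposed, so every object becomes individually
        // unreachable instead of waiting for the whole graph to be severed.
        public class BlobNode : IDisposable
        {
            public event EventHandler Changed;

            private BlobNode[] _children;
            private BlobNode _parent;

            public BlobNode(BlobNode parent, BlobNode[] children)
            {
                _parent = parent;
                _children = children;
                if (_parent != null)
                    _parent.Changed += OnParentChanged;
            }

            private void OnParentChanged(object sender, EventArgs e) { /* ... */ }

            public void Dispose()
            {
                if (_parent != null)
                {
                    _parent.Changed -= OnParentChanged;  // remove event handler
                    _parent = null;                      // sever the reference
                }
                if (_children != null)
                {
                    foreach (var child in _children)
                        child.Dispose();
                    _children = null;
                }
                Changed = null;
            }
        }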

  • When is someone else's code I use from the internet "mine"?

    - by robault
    I'm building a library from methods that I've found on the internet. Some are free to use or modify with no requirements; others say that if I leave a comment in the code it's okay to use; others say that when I use the code I have to attribute the use of someone's code in my application (in the credits for my app, I guess).

    What I've been doing is reorganizing classes, renaming methods, adding descriptions (code comments), renaming the parameters and names inside the methods to something meaningful, optimizing loops if applicable, changing return types, adding try/catch/throw blocks, adding parameter checks, and cleaning up resources in the methods. For example, I didn't come up with the algorithm for blurring a Bitmap, but I've taken the basic example of iterating through the pixels and turned it into a decent library method (applying the aforementioned modifications). I understand how to go about building it now myself, but I didn't actually hit the keystrokes to make it, and I couldn't have come up with it before learning from their example.

    What about code people get in answers on Stack Overflow or examples from CodeProject? At what point can I drop their requirements because at n% their code became mine? FWIW, I intend on using the libraries to create products that I will sell.

  • Why execution of a portion of code loaded from external file is not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described by the following hex dump:

        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC 3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02 EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10 48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1 8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B 45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C 01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC 03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D 45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet:

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random Bytes and display here ...
          // Instructions of loading of the binary file into MyBuffer using merely GetMem here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (The outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    This works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?

  • Creating content for rails-based applications

    - by Matthias Hryniszak
    Hi, I'm facing a problem of cleaning up my application in Ruby on Rails. What I have is a pretty standard 3-panel, header-and-footer layout where different parts of the screen contain different functionality. By that I mean, for example, that the header contains (among other things) a select that allows one to choose parts of the application and a context-dependent menu. The main content area contains obviously the most interactive stuff, whereas the side panels contain quick links with things like a shopping-cart preview, a list of potentially attractive products for the customer, a selector to narrow down the list of options...

    I was wondering how to go about simplifying the design. Right now I have the code that provides data for the "common" parts (as opposed to the direct content that's placed in the center) called from all the actions (with a filter), but that doesn't feel right to me. I've read that "components" are also not the way to go, for obvious performance reasons. Is there something that's more component-oriented (other frameworks do have that kind of thing - Grails: <ui:include ../>, ASP.NET MVC: <% Html.RenderAction() %>)?

    Best regards,
    Matthias.

  • Loading a helper elsewhere than the autoload.php?

    - by drpcken
    I inherited a project and I'm cleaning up a bit and trying to finish it. I noticed that they used (or wrote) a breadcrumb helper. It is in my helpers folder and is named breadcrumb_helper.php. It has a single function to build a breadcrumb menu with links and pass it to the view breadcrumbs.php. Here's the code:

        function show_breadcrumbs()
        {
            $ci =& get_instance();
            $ci->load->helper('inflector');
            $data = '';
            // build breadcrumb and store in $data
            $ci->load->view("breadcrumbs", $data);
        }

    I was trying to figure out how this helper worked, so I checked autoload.php, but there is no reference to the helper in there. In fact, here is my autoload:

        $autoload['helper'] = array('url','asset','combine','navigation','form','portfolio','cookie','default');

    This show_breadcrumbs() function is used quite a bit in some of my pages, so I'm confused as to how it's loading if it isn't in the autoloader. It is called like this in a few of my pages:

        <?=show_breadcrumbs()?>

    What am I missing? Why isn't this in my autoload? I even did a global search and couldn't find anywhere the helper is being loaded.

  • App no longer working - any ideas

    - by hamishmcn
    I am out of ideas as to why my app has suddenly stopped working - perhaps the collective mind of the SO community can help...

    Background: I have a large application that has been working up until recently. Now whenever I try to run it I get the error "The application failed to initialize properly (0xc0000005)". This happens before the app gets to _tmain(). It happens in both release and debug builds. I have tried cleaning and rebuilding the projects and rebooted my PC. The call stack just shows entries for kernel32.dll and ntdll.dll. The output window shows:

        First-chance exception at 0x00532c13 in a.exe: 0xC0000005: Access violation reading location 0xabababdb.
        First-chance exception at 0x7c964ed1 in a.exe: 0xC0000005: Access violation.
        Unhandled exception at 0x7c964ed1 in a.exe: 0xC0000005: Access violation.

    Any ideas?

    Edit: Okay - found the problem - it was dll related. My app uses shared dlls a.dll and b.dll (and others); a.dll hardly ever changes (and uses b.dll). b.dll was changed by another developer this morning and a.dll was not rebuilt. Depends.exe did not show any missing dlls; however, a.dll no longer works because of the change to b.dll.

  • Why Does Private Access Remain Non-Private in .NET Within a Class?

    - by AMissico
    While cleaning some code today written by someone else, I changed the access modifier from Public to Private on a class variable/member/field. I expected a long list of compiler errors that I use to "refactor/rework/review" the code that used this variable. Imagine my surprise when I didn't get any errors. After reviewing, it turns out that one instance of a class can access the private members of another instance of the same class, as long as the accessing code is declared within that class. Totally unexpected.

    Is this normal? I've been coding in .NET since the beginning and never ran into this issue, nor read about it. I may have stumbled onto it before, but only "vaguely noticed" and moved on. Can anyone explain this behavior to me? Am I doing something wrong? I found this behavior in both C# and VB.NET. The code seems to take advantage of the ability to access private variables.

    Sincerely, Totally Confused

        Class Foo
            Private _int As Integer
            Private _foo As Foo
            Private _jack As Jack
            Private _fred As Fred

            Public Sub SetPrivate()
                _foo = New Foo
                _foo._int = 3 'TOTALLY UNEXPECTED

                _jack = New Jack
                '_jack._int = 3 'expected compile error because Foo doesn't know Jack

                _fred = New Fred
                '_fred._int = 3 'expected compile error because Fred hides from Foo
            End Sub

            Private Class Fred
                Private _int As Integer
            End Class
        End Class

        Class Jack
            Private _int As Integer
        End Class
