Search Results

Search found 3471 results on 139 pages for 'automated builds'.


  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:

        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...

    How can I achieve the same effect without calling make_class on every page load? I know that:

    - I can't cache the classes in the Application object
    - I can't use the Application_OnStart hook in Global.asa
    - I could probably create a scripting component, but I really don't want to do that

    So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript.

    PS [further clarification]: In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.
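
    A minimal sketch of one workaround, assuming the expensive part is the metadata lookup rather than creating the classes themselves: plain strings (unlike JScript objects) can be stored in the Application object, so the database scan can run once per application and be cached as a string. The two helper functions below are hypothetical stand-ins:

        // init.inc -- hedged sketch, not the original code
        <!-- #include FILE="factory.inc" -->
        // Unset Application slots read as undefined/null in JScript.
        if (Application("pk_meta") == null) {
            Application.Lock();
            if (Application("pk_meta") == null) {
                // hypothetical helper that does the heavy DB scan once
                Application("pk_meta") = build_pk_meta_string();
            }
            Application.Unlock();
        }
        // make_class still runs per page, but can now just parse the
        // cached string instead of hitting the database.
        var meta = parse_pk_meta(String(Application("pk_meta"))); // hypothetical helper
        MyClass1 = make_class(myarg01, myarg02);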

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello All, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys. No errors are reported and VS just hangs. This is my first Analysis Services project, so I am hoping that there is something obvious that I am just missing.

    Here is the situation. I have a cube that I have successfully deployed. I then make some change, e.g. adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything (sometimes once, sometimes twice or more) before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy.

    Also, I have successfully deployed the cube and then subsequently successfully reprocessed a dimension, and yet when I open a query window in SQL Server Management Studio it says that it can't find any cubes.

    As a test, I deployed a cube successfully, deleted it in Analysis Services, and attempted to redeploy it without making any changes to the cube, only to see the same behaviour mentioned above. VS just hangs with no reason given, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine this is taking hours of my time, so I would greatly appreciate any assistance anyone can give me.

  • Javascript array of dates - not iterating properly (jquery ui datepicker)

    - by PaulB
    Hi, I have some code which builds an array of date ranges. I then call a function, passing it a date, and compare that date with the dates in the array. I'm doing it this way because the dates are stored in a CMS, and I'm manipulating the jQuery UI datepicker. Unfortunately my code only checks the first date range in the array, and I can't figure out why! I think it's probably something simple (/stupid!); if anyone can shed some light on it I'd be extremely grateful! The code is below; the June-September range works fine, the December-January range is totally ignored...

        <script type="text/javascript" language="javascript">
        var ps1 = new Date(2010, 06-1, 18);
        var pe1 = new Date(2010, 09-1, 03);
        var ps2 = new Date(2010, 12-1, 20);
        var pe2 = new Date(2011, 01-1, 02);

        var peakStart = new Array(ps1, ps2);
        var peakEnd = new Array(pe1, pe2);

        function checkDay(date) {
            var day = date.getDay();
            for (var i = 0; i < peakStart.length; i++) {
                if ((date > peakStart[i]) && (date < peakEnd[i])) {
                    return [(day == 5), ''];
                } else {
                    return [(day == 1 || day == 5), ''];
                }
            }
        }
        </script>
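
    For what it's worth, a sketch of one likely fix (my reading, not from the original post): the if/else returns on the loop's first iteration, so the second range is never reached. Returning early only on a match, and falling back to the default rule after the loop, would test every range:

        // hedged sketch of a possible fix: only return early on a match
        function checkDay(date) {
            var day = date.getDay();
            for (var i = 0; i < peakStart.length; i++) {
                if (date > peakStart[i] && date < peakEnd[i]) {
                    return [(day == 5), ''];        // inside a peak range
                }
            }
            return [(day == 1 || day == 5), ''];    // outside every range
        }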

  • Best source control system for maintaining different versions

    - by dalecooper
    Hi all! We need to be able to simultaneously maintain a set of different versions of our system. I assume this is best done using branching. We currently use TFS2008 for source control, work items and automatic builds. What is the best version control solution for this task?

    Our organization is in the process of migrating to TFS2010. Will TFS2010 give us the functionality we need to easily manage a series of branches per system version? We need to be able to keep each version isolated from the others, so that we can do testing and deployment for each version. Our dev team consists of five .NET developers and two Flash developers.

    I have heard a lot of talk about Git. Should we consider using Git instead of TFS for source control? Is it possible to use TFS2010 together with Git? Does anyone have similar setups that work nicely? Any suggestions are appreciated! Thanks, Kjetil.

  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete. So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.

  • Which pdf elements could cause crashes?

    - by Felixyz
    This is a very general question, but it's based on a specific problem. I've created a PDF reader app for the iPad and it works fine, except for certain PDF pages which always crash the app. We now found out that the very same pages cause Safari to crash as well, so, as I had started to suspect, the problem is somewhere in Apple's PDF rendering code. From what I have been able to see, the crashing pages cause the rendering libraries to start allocating memory like mad until the app is killed. I have nothing else to help me pinpoint what triggers this process. It doesn't necessarily happen with the largest documents, or the ones with the most shapes. In fact, we haven't found any parameter that helps us predict which pages will crash and which won't.

    Now we just discovered that running the pages through a consumer program that lets you merge docs gets rid of the problem, but I haven't been able to detect which attribute or element is the key. Changing documents by hand is also not an option for us in the long run; we need to run an automated process on our server.

    I'm hoping someone with deeper knowledge of the PDF file format will be able to point me in a reasonable direction to look for document features that could cause this kind of behavior. All I've found so far is something about JBIG2 images, and I don't think we have any of those.

  • Undefined Web.config error in VS 2008

    - by user1066050
    I'm working on a web app using VS 2008, .NET 3.5 and C#. Most of the projects in the solution are classic ASP.NET pages with some MVC 1 in the mix; the rest are shared libraries. The solution is some five years old, has gone through a variety of developers, and clearly has some performance and architectural issues.

    Previously, I worked on the project using VS 2008 on a Win XP machine, but I have just transitioned to a new box running Win 7 Ultimate. To do so, I installed VS 2008 and ASP.NET 3.5. To support future work on the solution I also installed VS 2010 and ASP.NET 4.0.

    Opening the solution on the new box with VS 2008 works fine, and it builds without error. However, when I attempt to run it with the debugger, I get the following message: "There is an error in web.config. Please correct before proceeding. (You might rename the current web.config and add a new one.)"

    I think it's clear that there is some sort of environmental issue regarding web.config on the new machine, but the error message is not helpful. Adding a new web.config is not an option, as the existing one is quite long and involved (too much to post here). I'm hoping someone has a suggestion or two about where I might look for missing elements or changed configurations that might produce such an error message. Failing that, I'll revisit this post and provide the web.config in the hope that will elicit further help.

    Thanks to all in advance for taking a look at this. The StackOverflow community has helped me many times in the past with pertinent answers, although this is my first posting. Jeff

  • How to update maven local repository with newer artifacts from a remote repository?

    - by Richard
    My Maven module A has a dependency on another Maven module B provided by other people. When I run "mvn install" under A for the first time, Maven downloads B-1.0.jar from a remote repository to my local Maven repository. My module A builds fine.

    In the meantime, other people are deploying newer B-1.0.jar builds to the remote repository. When I run "mvn install" under A again, Maven does not download the newer B-1.0.jar from the remote repository to my local repository. As a result, my module A's build fails due to API changes in B-1.0.jar.

    I could manually delete B-1.0.jar from my local repository. Then Maven would download the latest B-1.0.jar from the remote repository the next time I run "mvn install". My question is how I can automatically make Maven download the latest artifacts from a remote repository. I tried setting updatePolicy to "always", but that did not do the trick.
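
    A sketch of the standard arrangement, with the caveat that this reflects general Maven convention rather than anything stated in the post: Maven treats a plain release version such as 1.0 as immutable and never re-fetches it, and updatePolicy only governs SNAPSHOT (and metadata) checks. Publishing B as 1.0-SNAPSHOT and enabling snapshot updates makes the re-download happen (repository id and URL below are placeholders):

        <!-- hedged sketch; repository id/url are placeholders -->
        <dependency>
            <groupId>org.example</groupId>
            <artifactId>B</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

        <repository>
            <id>example-remote</id>
            <url>http://repo.example.com/maven2</url>
            <snapshots>
                <enabled>true</enabled>
                <updatePolicy>always</updatePolicy>
            </snapshots>
        </repository>

    Running "mvn -U install" forces the snapshot check regardless of the configured policy.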

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive", which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect."

    Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this.

    UPDATE: Michael Grinich requested clarification; here's what exactly you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive."

    1. You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement.

    2. More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command line can be beta tested on phones and then submitted directly to Apple.

    I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.
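
    For reference, the closest command-line recipe of that era (a sketch under the assumption of a standard project layout; the app name, signing identity and profile below are placeholders, and the caveats above still apply) built the app and then wrapped it into an IPA:

        # hedged sketch: build, then package the .app into an .ipa
        xcodebuild -configuration Release -sdk iphoneos clean build
        xcrun -sdk iphoneos PackageApplication \
            "build/Release-iphoneos/MyApp.app" \
            -o "$PWD/MyApp.ipa" \
            --sign "iPhone Distribution: Example Corp" \
            --embed "AdHoc.mobileprovision"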

  • Switch gettext translated language with original language

    - by Ruben
    Hi everyone, I started my PHP application with all text in German, then used gettext to extract all strings and translate them to English. So, now I have a .po file with all msgids in German and msgstrs in English. I want to switch them, so that my source code contains the English as msgids. There are numerous reasons for this:

    - More translators will know English, so it is only appropriate to serve them a file with msgids in English. I could always switch the file before I give it out and after I receive it.
    - It would help me to write English object and function names and comments if the content text was also English. I'd like to do that, so the project is more open to other Open Source collaborators (who are more likely to know English than German).

    I could do this manually, and this is the sort of task where I anticipate it will take me more time to write an automated routine for it (because I'm very bad with shell scripts) than to do it by hand. But I also anticipate despising every minute of manual computer labour (feels like an oxymoron, right?) like I always do. Has someone done this before? I figured this would be a common problem, but couldn't find anything. Many thanks ahead.

    Sample problem:

        <title><?=_('Routinen')?></title>

        #: /users/ruben/sites/v/routinen.php:43
        msgid "Routinen"
        msgstr "Routines"
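
    One possible automated route, sketched here on the assumption that a small Python script is acceptable (polib is a third-party library, not part of gettext itself):

        # hedged sketch: swap msgid and msgstr with polib
        import polib

        po = polib.pofile('de.po')        # msgid = German, msgstr = English
        for entry in po:
            entry.msgid, entry.msgstr = entry.msgstr, entry.msgid
        po.save('en.po')                  # msgid = English, msgstr = German

    The source code itself (the _('Routinen') calls) would still need a separate pass to replace the German literals with their English counterparts.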

  • switching C compiler causes: Error Initializer cannot be specified for a flexible array member

    - by user1054210
    I am trying to convert our code from one IDE to be used in a different one. The current one uses gcc, which allows this structure to be initialized from a variable array. The new tool does not use gcc and gives me the error "Initializer cannot be specified for a flexible array member". So can someone help me understand how to set this up? Should I set up a blank array of variable size and then somewhere assign the #define array as seen below?

    Here is an example of the code (this is the current implementation in the current IDE). In one header file that is build-switchable, so we can build this on different hardware platforms, we have the following #define:

        #define GPIOS \
            /* BANK, PIN, SPD, MODE, ... */ \
            GPIOINIT(   A, 0, 2, AIN, ...) \
            GPIOINIT(   A, 1, 2, AIN, ...) \
            GPIOINTINIT(A, 2, 2, AIN, ...) \
            ...

    Then in a different header file that is used in all builds we have:

        PLATFORM_CONFIG_T g_platformConfig = {
            .name = {PLATFORM_NAME},
            /* (bunch of other stuff) */
            .allGpios = {
                GPIOS   /* here I get the error */
            },
        };

    So I am thinking I can make the error line a variable array and assign to it later in some other way? The problem is that the actual array "GPIO" is of different types, and pin orders on different designs are different.
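
    A sketch of one common workaround (field names and types below are illustrative guesses, since the real PLATFORM_CONFIG_T is not shown): initialize a separate fixed-size array, whose length the compiler deduces from the initializer list, and refer to it from the config struct instead of using a flexible array member:

        /* hedged sketch -- illustrative types, not the project's real ones */
        #include <stddef.h>

        typedef struct { int bank, pin, spd, mode; } GPIO_CONFIG_T;

        /* the compiler sizes this array from the GPIOS initializer list */
        static const GPIO_CONFIG_T g_gpios[] = { GPIOS };

        typedef struct {
            const char          *name;
            const GPIO_CONFIG_T *gpios;     /* was: flexible array member */
            size_t               gpioCount;
        } PLATFORM_CONFIG_T;

        PLATFORM_CONFIG_T g_platformConfig = {
            .name      = PLATFORM_NAME,
            .gpios     = g_gpios,
            .gpioCount = sizeof g_gpios / sizeof g_gpios[0],
        };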

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their master pages, between release and debug builds. The particular scenario this applies to is handling concatenated JS and CSS files. I'm currently using the .NET port of YUI Compressor to produce a single site.css and site.js from a large collection of separate files.

    One thought that occurred to me was to place the JS and CSS include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically; wrapping <script> tags in a <div> is a bit gross, and there must be a better approach. I would also think that a block-level element like a <div> within <head> would be invalid HTML. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
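
    One hedged alternative sketch (my suggestion, not from the post): an asp:PlaceHolder emits no wrapper markup of its own, so it sidesteps the <div>-inside-<head> problem while keeping the same #if DEBUG toggle in code-behind (file paths below are placeholders):

        <%-- hedged sketch; toggle Visible from code-behind as above --%>
        <asp:PlaceHolder ID="phDebugIncludes" runat="server" Visible="false">
            <link href="css/separate1.css" rel="stylesheet" type="text/css" />
            <script src="js/separate1.js" type="text/javascript"></script>
        </asp:PlaceHolder>
        <asp:PlaceHolder ID="phReleaseIncludes" runat="server" Visible="false">
            <link href="css/site.css" rel="stylesheet" type="text/css" />
            <script src="js/site.js" type="text/javascript"></script>
        </asp:PlaceHolder>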

  • How does Amazon's Statistically Improbable Phrases work?

    - by ??iu
    How does something like Statistically Improbable Phrases work? According to Amazon:

        Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book.

        SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.

    For instance, for Joel's first book, the SIPs are: leaky abstractions, antialiased text, own dog food, bug count, daily builds, bug database, software schedules.

    One interesting complication is that these are phrases of either 2 or 3 words. This makes things a little more interesting because these phrases can overlap with or contain each other.
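
    As a rough sketch of the general idea (my guess at the shape of such a scorer, not Amazon's actual algorithm): score each 2- or 3-word phrase by how much more frequent it is in one book than in the corpus as a whole, then sort by that ratio:

        # hedged sketch: relative-frequency "improbability" scoring
        from collections import Counter

        def ngrams(words, n):
            return zip(*(words[i:] for i in range(n)))

        def sip_scores(book_words, corpus_counts, corpus_total, n=2):
            book_counts = Counter(ngrams(book_words, n))
            book_total = max(sum(book_counts.values()), 1)
            scores = {}
            for phrase, count in book_counts.items():
                p_book = count / float(book_total)
                # add-one smoothing so unseen corpus phrases don't divide by zero
                p_corpus = (corpus_counts.get(phrase, 0) + 1) / float(corpus_total + 1)
                scores[phrase] = p_book / p_corpus
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)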

  • Why are compilers so stupid?

    - by martinus
    I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never anything even a little bit complex. For example, this code takes about 6 seconds on my computer to print the value zero (using Java 1.6):

        int x = 0;
        for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
            x += x + x + x + x + x;
        }
        System.out.println(x);

    It is totally obvious that x is never changed, so no matter how often you add 0 to itself it stays zero. So the compiler could in theory replace this with System.out.println(0). Or even better, this takes 23 seconds:

        public int slow() {
            String s = "x";
            for (int i = 0; i < 100000; ++i) {
                s += "x";
            }
            return 10;
        }

    First, the compiler could notice that I am actually creating a string s of 100000 "x" characters, so it could automatically use a StringBuilder instead, or even better, directly replace it with the resulting string, as it is always the same. Second, it does not recognize that I do not actually use the string at all, so the whole loop could be discarded!

    Why, after so much manpower has gone into fast compilers, are they still so relatively dumb?

    EDIT: Of course these are stupid examples that should never be used anywhere. But whenever I have to rewrite beautiful and very readable code into something unreadable so that the compiler is happy and produces fast code, I wonder why compilers or some other automated tool can't do this work for me.
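
    For concreteness, the manual rewrite alluded to for the second example would look something like this (a standard transformation, shown as an illustration rather than a claim about what any particular JIT does):

        // the allocation-friendly form a compiler could in principle derive
        public int fast() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100000; ++i) {
                sb.append("x");
            }
            String s = sb.toString();  // still unused, so even this could be elided
            return 10;
        }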

  • If the dependency is a unique snapshot version and install is called, what does maven select?

    - by chrsk
    Imagine two projects. The first is the framework-core project, which is at version 1.1.0 and has several snapshot builds. The other is the example-business project, which has the following dependency on framework-core at build-iteration number 9:

        <dependency>
            <groupId>org.example</groupId>
            <artifactId>framework-core</artifactId>
            <version>1.1.0-20100518.134928-9</version>
        </dependency>

    What happens if mvn install is called on framework-core? I found out that the artifact is copied to the folder and is named *.1.1.0-SNAPSHOT.jar (as expected). This led me to the assumption that this version is only used if the 1.1.0-SNAPSHOT version itself is defined as the dependency, and not the precise build.

    To test something locally without deploying it to the Maven repository: call mvn install, change the dependency to 1.1.0-SNAPSHOT, and the artifact just installed is used? Or is it possible to overwrite the specific build (while using the install lifecycle phase)?
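
    My reading of the contrast being asked about (an assumption based on general Maven convention, not verified against this setup): the moving coordinate picks up whatever mvn install last placed in the local repository, while the timestamped coordinate pins one specific deployed build:

        <!-- hedged sketch: the moving snapshot, resolved from the local
             repository after a plain `mvn install` of framework-core -->
        <dependency>
            <groupId>org.example</groupId>
            <artifactId>framework-core</artifactId>
            <version>1.1.0-SNAPSHOT</version>
        </dependency>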

  • IntelliJ Doesn't Notice Changes in Interface

    - by yar
    [I've decided to give IntelliJ another go (to replace Eclipse), since its Groovy support is supposed to be the best. But back to Java...]

    I have an interface that defines a constant:

        public static final int CHANNEL_IN = 1;

    and about 20 classes in my module implement that interface. I've decided that this constant was a bad idea, so I did what I do in Eclipse: I deleted the entire line. This should cause the Project tree to light up like a Christmas tree and all classes that implement that interface and use that constant to break. Instead, this is not happening. If I don't actually double-click on the relevant classes (which I find using grep), the module even builds correctly (using Build - Make Module). If I double-click on a relevant class, the error is shown both in the Project tree and in the editor.

    I am not able to replicate this behavior in small tests, but in large modules it works (incorrectly) this way. Is there some relevant setting in IntelliJ for this?

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. for running tests against specific XML files) just get System.IO.DirectoryNotFoundExceptions. The reason for this is clear: it's looking for those supporting XML files loaded by various unit tests in the wrong folder.

    The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
            |   +-- test.xml
            |   +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly).

    As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass.

    Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do that.
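
    One common arrangement, sketched as a suggestion rather than a known fix for this setup: mark each test XML file as Content with "Copy to Output Directory", and resolve paths relative to the test assembly rather than the current working directory, so both NUnit and TeamCity find the files wherever the assembly runs:

        // hedged sketch: resolve test data next to the test assembly
        using System.IO;
        using System.Reflection;

        static string TestDataPath(string fileName)
        {
            string baseDir = Path.GetDirectoryName(
                Assembly.GetExecutingAssembly().Location);
            return Path.Combine(baseDir, fileName);
        }

        // usage inside a test: var doc = XDocument.Load(TestDataPath("test.xml"));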

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out?

    What I'm doing so far is running the GUI functional automation on the app under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a BoundsChecker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-lint in the hope of locating potential deadlocks, but haven't had any worthwhile results.

    The application is C++, MFC, multiple document/view, with a number of threads per document. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead.

    What other tests would you carry out?
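
    For reference, the scoped-lock idiom described reads something like this (a reconstruction with illustrative names; the poster's actual class is not shown):

        // hedged reconstruction of the RAII lock described above
        #include <afxmt.h>   // MFC synchronization classes (CMutex)

        class ScopedLock {
        public:
            explicit ScopedLock(CMutex* mutex, DWORD timeoutMs = 5000)
                : m_mutex(mutex)
            {
                // CMutex::Lock returns FALSE on timeout
                if (!m_mutex->Lock(timeoutMs))
                    TRACE(_T("lock timeout\n"));  // the warning the poster mentions
            }
            ~ScopedLock() { m_mutex->Unlock(); }
        private:
            CMutex* m_mutex;
        };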

  • Sizing issues while adding a .Net UserControl to a TabPage

    - by TJ_Fischer
    I have a complex Windows Forms GUI program that does a lot of automated control generation and manipulation. One thing that I need to be able to do is add a custom UserControl to a newly instantiated TabPage. However, when my code does this, I get automatic resizing events that cause the formatting to get ugly. Without detailing all of the different containers that could possibly be involved, the basic issue is this. At a certain point in the code I create a new tab page:

        TabPage tempTabPage = new TabPage("A New Tab Page");

    Then I set it to a certain size that I want it to maintain:

        tempTabPage.Width = 1008;
        tempTabPage.Height = 621;

    Then I add it to a TabControl:

        tabControl.TabPages.Add(tempTabPage);

    Then I create a user control that I want to appear in the newly added TabPage:

        CustomView customView = new CustomView("A new custom control");

    Here is where the problem comes in. At this point both tempTabPage and customView are the same size, with no padding or margin, and they are the size I want them to be. I now try to add this new custom UserControl to the tab page like this:

        tempTabPage.Controls.Add(customView);

    When making this call, customView and its child controls get resized to be larger, and so parts of customView are hidden. Can anyone give me any direction on what to look for or what could be causing this kind of issue? Thanks ahead of time.
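
    A hedged suggestion (an assumption about the cause, not a confirmed fix): suspending layout around the add, and stating the child's layout intent explicitly, often tames spurious resize events in WinForms:

        // hedged sketch: add the control with layout suspended
        tempTabPage.SuspendLayout();
        customView.Dock = DockStyle.None;   // or DockStyle.Fill, if it should track the page
        customView.Anchor = AnchorStyles.Top | AnchorStyles.Left;
        tempTabPage.Controls.Add(customView);
        tempTabPage.ResumeLayout(true);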

  • rabbitmq-erlang-client, using rebar friendly pkg, works on dev env fails on rebar release

    - by lfurrea
    I am successfully using the rebar-friendly package of rabbitmq-erlang-client for a simple Hello World rebarized and OTP "compliant" app, and things work fine in the dev environment. I am able to fire up an erl console, do application:start(helloworld). and connect to the broker, open up a channel and communicate with queues.

    However, when I proceed to do rebar generate, it builds up the release just fine, but when I try to fire up from the self-contained release package, things suddenly explode. I know rebar releases are known to be an obscure art, but I would like to know what my options are as far as deployment for an app using the rabbitmq-erlang-client. Below you will find the output of the console on the crash:

        =INFO REPORT==== 18-Dec-2012::16:41:35 ===
            application: session_record
            exited: {{{badmatch,
                       {error,
                        {'EXIT',
                         {undef,
                          [{amqp_connection_sup,start_link,
                            [{amqp_params_network,<<"guest">>,<<"guest">>,<<"/">>,
                              "127.0.0.1",5672,0,0,0,infinity,none,
                              [#Fun<amqp_auth_mechanisms.plain.3>,
                               #Fun<amqp_auth_mechanisms.amqplain.3>],
                              [],[]}],
                            []},
                           {supervisor2,do_start_child_i,3,
                            [{file,"src/supervisor2.erl"},{line,391}]},
                           {supervisor2,handle_call,3,
                            [{file,"src/supervisor2.erl"},{line,413}]},
                           {gen_server,handle_msg,5,
                            [{file,"gen_server.erl"},{line,588}]},
                           {proc_lib,init_p_do_apply,3,
                            [{file,"proc_lib.erl"},{line,227}]}]}}}},
                     [{amqp_connection,start,1,
                       [{file,"src/amqp_connection.erl"},{line,164}]},
                      {hello_qp,start_link,0,[{file,"src/hello_qp.erl"},{line,10}]},
                      {session_record_sup,init,1,
                       [{file,"src/session_record_sup.erl"},{line,55}]},
                      {supervisor_bridge,init,1,
                       [{file,"supervisor_bridge.erl"},{line,79}]},
                      {gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},
                      {proc_lib,init_p_do_apply,3,
                       [{file,"proc_lib.erl"},{line,227}]}]},
                    {session_record_app,start,[normal,[]]}}
            type: permanent
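
    A hedged guess at the direction to look, based only on the undef above: the release apparently does not contain the amqp_client code, so amqp_connection_sup cannot be loaded at runtime even though it was on the path in the dev environment. Listing amqp_client (and rabbit_common) in the release spec in reltool.config is the usual remedy; the sketch below follows the app name shown in the crash report:

        %% hedged sketch for reltool.config
        {sys, [
            {rel, "session_record", "1",
             [kernel, stdlib, sasl,
              rabbit_common, amqp_client,
              session_record]},
            {app, rabbit_common, [{incl_cond, include}]},
            {app, amqp_client,   [{incl_cond, include}]}
        ]}.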

  • Splitting a set of object into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn, and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D.

    However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast.

    The algorithm I have currently is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = such i that size(f(Si + {x})) - size(f(Si)) is min
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si.

    Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?
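
    For concreteness, the greedy loop above translates directly into code like this (a sketch assuming a callable f_size standing in for size(f(Si)), which, as noted, is the expensive part):

        # hedged sketch of the described greedy assignment
        def greedy_partition(items, n, f_size):
            subsets = [[] for _ in range(n)]
            for x in items:
                best = min(range(n),
                           key=lambda i: f_size(subsets[i] + [x]) - f_size(subsets[i]))
                subsets[best].append(x)
            return subsets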

  • check properties of two objects for changes

    - by k-hoffmann
    Hi, I have to develop a mechanism to check two objects' properties for changes. All properties which need to be checked are marked with an attribute. At the moment I:

    - read all properties from the actual object via LINQ
    - read the corresponding property from the old object
    - fill an own object with the two properties (old and new value)

    In code, the call to the worker class looks like this:

        public void CreateHistoryMap(BaseEntity actual, BaseEntity old)
        {
            CreateHistoryMap(actualEntity, oldEntity)
                .ForEach(mapEntry => CreateHistoryEntry(mapEntry),
                         mapEntry => IfChangesDetected(mapEntry));
        }

    CreateHistoryMap builds up the HistoryMapEntry, which contains the two properties. CreateHistoryEntry builds up the object which is saved to the database, and IfChangesDetected checks the object for changes. I have to handle our own special application types to generate history values for the database (like concatenating list values and so on).

    My problem is now that I have to read the values of the properties twice: once for change detection, and once for the concrete CreateHistoryEntry. How can I eliminate this problem, or how can I implement the change tracking scenario with the nice C# 3.5 features? Thanks a lot.
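
    One hedged sketch of reading each marked property only once (HistoryAttribute and the HistoryMapEntry constructor shape are assumptions standing in for the poster's actual types):

        // hedged sketch: capture old/new values in one reflection pass
        // (requires using System; using System.Linq; using System.Reflection;)
        var changedEntries =
            from prop in actual.GetType().GetProperties()
            where Attribute.IsDefined(prop, typeof(HistoryAttribute)) // marker attribute (name assumed)
            let oldValue = prop.GetValue(old, null)
            let newValue = prop.GetValue(actual, null)
            where !object.Equals(oldValue, newValue)                   // change detection...
            select new HistoryMapEntry(prop.Name, oldValue, newValue); // ...and history data, from one read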

  • How to return ArrayList results from an IntentService

    - by gcl1
    I have an IntentService that loads up an ArrayList with data from a network source (AWS SDB tables). The ArrayList is in a global space, accessible to both the calling Activity and the IntentService (like this: appState = ((App)getApplicationContext())). When the IntentService is done, it notifies the Activity through a ResultReceiver, and the Activity calls adapter.notifyDataSetChanged() to update the ListView.

    This solution works most of the time, but it violates the rule that only the UI thread should make changes to the data underlying a ListView. So as it is, I sometimes get an error: "The content of the adapter has changed but ListView did not receive a notification."

    I think this must be a common situation. Please let me know if you have any suggestions or best practices for this problem. Here are three options I'm aware of:

    1. Keep the IntentService, and have it store the results in another "working" ArrayList, also in the global space. When the result is ready, the IntentService calls the ResultReceiver (on the UI thread), which can then: a) copy the result to the ArrayList associated with the ListView, and b) call adapter.notifyDataSetChanged(). CONS: I don't like the idea of putting temp/working data in a global space, and copying the result list seems inefficient.

    2. Keep the IntentService, and have it pass the results back through a bundle loaded with a ParcelableArrayList. CONS: I'm not sure if this approach would scale for very large result sets.

    3. Switch to a Service which builds a local copy of the result list. Have the Activity directly access the address space of the Service in order to read the result list. CON: Still requires copying the results to the ArrayList associated with the ListView.

    Thank you.
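
    A hedged sketch of option 2 above (the Item type is a placeholder for whatever Parcelable the results hold): the IntentService hands the list over in the ResultReceiver bundle, and the Activity applies it to the adapter on the UI thread:

        // in the IntentService -- hedged sketch, Item is a placeholder Parcelable
        Bundle bundle = new Bundle();
        bundle.putParcelableArrayList("results", results); // results: ArrayList<Item>
        receiver.send(Activity.RESULT_OK, bundle);

        // in the Activity's ResultReceiver callback (runs on the UI thread)
        ArrayList<Item> results = resultData.getParcelableArrayList("results");
        adapter.clear();
        for (Item item : results) {
            adapter.add(item);   // ArrayAdapter.addAll only exists from API 11
        }
        adapter.notifyDataSetChanged();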

  • Does XAML work with file links in Visual Studio?

    - by Tim
    I'm adding a new WPF project to an existing Visual Studio solution and would like to reuse a bunch of code (C# and XAML) from an existing project within the solution. I've created the new project and added existing files as follows:

    1. Right click the project
    2. Add - Add Existing Item
    3. Find the file to reuse, use the arrow next to "Add" and choose "Add as Link"

    I now have a nice project set up with all the proper links. However, XAML chokes on these links. For example:

        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Resources\Elements\Buttons\Buttons.xaml" />
            <ResourceDictionary Source="Resources\Elements\TextBox\TextBox.xaml" />
        </ResourceDictionary.MergedDictionaries>

    The files "Buttons.xaml" and "TextBox.xaml" exist as links in my new project. The project builds, but when I run, I get the following XamlParseException:

        'Resources\Elements\Buttons\Buttons.xaml' value cannot be assigned to property 'Source'
        of object 'System.Windows.ResourceDictionary'.
        Cannot locate resource 'resources/elements/buttons/buttons.xaml'.

    It seems like the XAML parser is requiring an actual copy of these XAML files to exist in my new project, instead of links. This is exactly what I'm trying to avoid. I want my project to share these files so that any changes get transferred to the other project without hunting and copying. Any insight is appreciated!
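
    One hedged thing to try (an assumption on my part, not a verified fix for linked files): a pack URI that names the assembly which actually compiled the dictionary can locate the resource regardless of where the source file lives ("OriginalProject" below stands for the assembly that owns the XAML):

        <!-- hedged sketch; "OriginalProject" is a placeholder assembly name -->
        <ResourceDictionary Source="pack://application:,,,/OriginalProject;component/Resources/Elements/Buttons/Buttons.xaml" />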

  • how to achieve this single line css format....

    - by metal-gear-solid
    I tried many formats, but this one is the best for finding classes and IDs easily without using "Find". It works well if the editor window is wide. I use this tool to format my CSS in single lines: http://www.newmediacampaigns.com/files/posts/css-formatting/clean.php

        #wrapper {width:800px; margin:0 auto;}
        #header {height:100px; position:relative;}
        #feature .post {width:490px; float:left;}
        #feature .link {width:195px; display:inline; margin:0 10px 0 0; float:right; border-bottom:1px solid red}
        #footer {clear:both; font-size:93%; float:none;}

    I need it like this (see the selector #feature .link): if any selector has more than three properties, then the rest should go onto the next line, as in this example, because I work in a CMS where the textbox I write CSS in is not very wide:

        #wrapper {width:800px; margin:0 auto;}
        #header {height:100px; position:relative;}
        #feature .post {width:490px; float:left;}
        #feature .link {width:195px; display:inline; margin:0 10px 0 0;
            float:right; border-bottom:1px solid red}
        #footer {clear:both; font-size:93%; float:none;}

    Is there any tool/software that can format CSS like this? Or, starting from the single-line format, can this formatting be applied to long lines in an automated way?
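
    Since no ready-made tool is named in the post, here is a rough sketch of an automated pass (regex-based, and it assumes simple flat rules like the sample, no nested or @media blocks) that wraps any rule with more than three declarations:

        # hedged sketch: wrap CSS rules that have more than three declarations
        import re

        def format_css(css, max_inline=3):
            out = []
            for sel, body in re.findall(r'([^{}]+)\{([^}]*)\}', css):
                decls = [d.strip() for d in body.split(';') if d.strip()]
                if len(decls) <= max_inline:
                    out.append('%s {%s;}' % (sel.strip(), '; '.join(decls)))
                else:
                    head, tail = decls[:max_inline], decls[max_inline:]
                    out.append('%s {%s;\n    %s}' %
                               (sel.strip(), '; '.join(head), '; '.join(tail)))
            return '\n'.join(out)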
