Search Results

Search found 14416 results on 577 pages for 'business logic'.


  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown, and I've been thinking about what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style:

        - No more than 64 sprites on the screen at a time
        - No more than 8 sprites per scanline (i.e. per line on the Y axis)
        - If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action

    From what I've read, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even-numbered sprite on opposite frames from odd-numbered ones.

    The scanline issue is interesting. From my testing, it is impossible to get good speed on the XBOX 360 XNA framework by drawing sprites pixel-by-pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For all purposes of this project, I'm making scanlines 8 pixels tall and grouping the sprites together per scanline by their Y positioning. So, dumbed down, I need to come up with a solution that supports:

        - 64 sprites on screen at once
        - 8 sprites per 'scanline'
        - drawing sprites based on priority
        - alternating between sprites per frame
        - emulated slowdown

    Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance:

        - 0 to 63 being low
        - 64 to 127 being medium
        - 128 to 191 being high
        - 192 to 255 being maximum

    Within my data files, I can assign each sprite a certain priority level. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. When a sprite gets drawn in a frame, I would then randomly generate it a new priority value within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could then be drawn after 3 frames of not being drawn (45+32+32+32=141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen.

    Now, the interesting question is how do I limit sprites to only 8 per scanline? I'm thinking that since I'm sorting the sprites from high priority to low priority, I iterate through the list until I've drawn 64 sprites. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check how many sprites have already been drawn in its respective scanline via counter variables. For example:

        - Y-values between 0 and 7 belong to scanline 0: scanlineCount[0] = 0
        - Y-values between 8 and 15 belong to scanline 1: scanlineCount[1] = 0
        - etc.

    I could reset the values per scanline for every frame drawn. While going down the sprite list, I add 1 to the scanline's respective counter if a sprite gets drawn in that scanline. If it equals 8, I don't draw that sprite and go to the sprite with the next lowest priority. (A sketch of this selection loop follows at the end of this post.)

    SLOWDOWN: The last thing I need to do is emulate slowdown.
    My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering by 16ms or so. However, in the NES games I've played, sometimes there's slowdown without any sprite flickering going on, whereas the game moves beautifully even when there is some sprite flickering. Perhaps I could give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects with sprites surpass a threshold, introduce the slowdown?

    IN CONCLUSION: Does everything I wrote actually sound legitimate and could work, or is it a pipe dream? What improvements can you all possibly think of for this game programming theory of mine?
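    For concreteness, here is a minimal C# sketch of the selection loop described above. It is only an illustration of the priority-aging idea; every name in it (SpriteInfo, BandMin/BandMax, the 240-pixel screen height) is invented for the example, not taken from any real engine:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical sprite record; BandMin/BandMax delimit its base priority band.
        class SpriteInfo
        {
            public int Y, Priority, BandMin, BandMax;
        }

        class SpriteScheduler
        {
            const int MaxSprites = 64, MaxPerScanline = 8, ScanlineHeight = 8, ScreenHeight = 240;
            static readonly Random Rng = new Random();

            // Picks the sprites to draw this frame and ages the ones that were skipped.
            public static List<SpriteInfo> SelectForFrame(IEnumerable<SpriteInfo> sprites)
            {
                var scanlineCount = new int[ScreenHeight / ScanlineHeight];
                var toDraw = new List<SpriteInfo>();

                foreach (var s in sprites.OrderByDescending(x => x.Priority))
                {
                    int line = Math.Min(Math.Max(s.Y, 0) / ScanlineHeight, scanlineCount.Length - 1);
                    if (toDraw.Count < MaxSprites && scanlineCount[line] < MaxPerScanline)
                    {
                        scanlineCount[line]++;
                        toDraw.Add(s);
                        s.Priority = Rng.Next(s.BandMin, s.BandMax + 1); // drawn: re-roll within its band
                    }
                    else
                    {
                        s.Priority = Math.Min(255, s.Priority + 32);    // skipped: age by 32 so it wins a slot soon
                    }
                }
                return toDraw;
            }
        }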

    Read the article

  • JNDI / Classpath problem in Glassfish

    - by Michael Borgwardt
    I am in the process of converting a large J2EE app from EJB 2 to EJB 3 (all stateless session beans, using Glassfish 2.1.1), and I am running out of ideas. The first EJB I converted (let's call it Foo) ran without major problems (it was the only one in its EJB module and I could completely replace the deployment descriptor with annotations) and the app ran fine. But after converting the second one (let's call it Bar, one of several in a different EJB module) there is a weird combination of problems:

        - The app deploys without errors (nothing in the logs either)
        - There is an error when looking up Bar via JNDI
        - When looking at the JNDI tree in the Glassfish admin console, Bar is not present at all

    When I look at the logs, I see this (Foo is the correct name of the remote interface of the first converted, previously working EJB):

        Caused by: javax.naming.NamingException: ejb ref resolution error for remote business interface Foo [Root exception is java.lang.ClassNotFoundException: Foo]
            at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425)
            at com.sun.ejb.containers.RemoteBusinessObjectFactory.getObjectInstance(RemoteBusinessObjectFactory.java:74)
            at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
            at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:414)
            at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:603)
            at javax.naming.InitialContext.list(InitialContext.java:395)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
            ... 68 more
        Caused by: java.lang.ClassNotFoundException: XXX
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1578)
            at com.sun.ejb.EJBUtils.getBusinessIntfClassLoader(EJBUtils.java:679)
            at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:348)
            ... 75 more

    This is followed by more exceptions for all the entries that (like Foo) do appear in the JNDI tree. Those look like this:

        Caused by: javax.naming.NotContextException: BarHome cannot be listed
            at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:607)
            at javax.naming.InitialContext.list(InitialContext.java:395)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
            ... 68 more

    However, there is no exception for Bar; it does not appear in the log at all, except for one entry during deployment. The other EJBs in the same module do appear, as does Foo. Any ideas what could cause this or how to diagnose it further? The beans are pretty straightforward:

        @Stateless(name = "Foo")
        @RolesAllowed("FOOUSER")
        @TransactionAttribute(TransactionAttributeType.SUPPORTS)
        public class FooImpl extends BaseBean implements Foo {

    I'm also having some problems with the deployment descriptor for Bar (I'd like to eliminate it, but Glassfish doesn't seem to like having a bean appear only in sun-ejb-jar.xml, or having some beans in a module declared in the descriptor while others use only annotations), but I can't see how that could cause the ClassNotFoundException on Foo. Is there a way to see the classpath that Glassfish is actually using?

    Read the article

  • AccessViolationException thrown

    - by user255371
    Hi, I'm working on a project that's written in C# and uses a C++ DLL to communicate with a robot. Originally the software was written in C# with VS 2003 and was converted to VS 2008 (no change to the code) using .NET 2.0. Now, I have started seeing the "Attempted to read or write protected memory..." error on some computers.

    The access violation error is always thrown when the code calls one particular method from the DLL; however, that very same method is called over and over throughout the task and executes fine - just sometimes it throws the error. Also, the robot seems to execute the command fine, which tells me that the values passed into the DLL exist and thus are accessible.

    The software built on .NET 1.1 was used for years and worked fine without ever throwing any memory errors. Now that it uses .NET 2.0 it throws errors, and only on some computers. I'm not sure what's causing the issue. I ruled out inappropriate calling of the DLL methods (incorrect marshalling and so on), as it has been working fine with .NET 1.1 for years and thus should work fine in .NET 2.0 as well. I've seen some posts suggesting that it could be the GC, but then again, why would it only happen on this one computer and only sometimes? Also, the values passed in are all global variables in the C# code, so they should exist until the application is shut down, and the GC has no business moving any of those around or deleting them.

    Another observation, as I mentioned above: the robot executes the command normally, which means it gets all its necessary values. I am not sure what the C++ DLL's method would do at the end where the GC could mess things up. It shouldn't try to delete the global variables passed in, and the method is not modifying those variables either (I'm not expecting any return values through the passed-in values; the only return value is the method's return code, which again shouldn't have anything to do with the GC).

    One important piece of information I should add is that I have no access to the C++ code and thus cannot make any changes there. The fix has to come through the C# code, some settings on the computer, or something else that I am in control of. Any help greatly appreciated. Thanks.

    Code snippet - the original method declaration from VS 2003:

        [DllImport("TOOLB32.dll", EntryPoint = "TbxMoveWash")]
        public static extern int TbxMoveWash(int tArmId, string lpszCarrierRackId,
            int eZSelect, int[] lpTipSet, int tVol, bool bFastW);

    which I modified after seeing the error to the following (but the error still occurs):

        [DllImport("TOOLB32.dll", EntryPoint = "TbxMoveWash")]
        public static extern int TbxMoveWash(int tArmId, string lpszCarrierRackId,
            int eZSelect,
            [MarshalAs(UnmanagedType.LPArray, SizeConst = 8)] int[] lpTipSet,
            int tVol, bool bFastW);
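    One possibility worth ruling out, given the intermittent timing: if the native side keeps the lpTipSet pointer and touches it after TbxMoveWash returns, the default marshalling copy is released too early, and GC timing differences between runtimes would explain the sporadic crashes. A hedged experiment is to pass a buffer that is pinned for the life of the process, so the GC can never move or free it. This wrapper is only a sketch; the TbxMoveWash signature comes from the question, everything else (RobotInterop, MoveWash) is invented:

        using System;
        using System.Runtime.InteropServices;

        static class RobotInterop
        {
            // Same entry point as above, but taking a raw pointer so the buffer's
            // lifetime is under our control rather than the marshaller's.
            [DllImport("TOOLB32.dll", EntryPoint = "TbxMoveWash")]
            static extern int TbxMoveWash(int tArmId, string lpszCarrierRackId,
                int eZSelect, IntPtr lpTipSet, int tVol, bool bFastW);

            // Pinned permanently: the GC can neither move nor collect this array
            // while the native code might still be reading it.
            static readonly int[] TipSet = new int[8];
            static readonly GCHandle TipSetHandle = GCHandle.Alloc(TipSet, GCHandleType.Pinned);

            public static int MoveWash(int armId, string rackId, int zSelect,
                int[] tips, int vol, bool fastWash)
            {
                Array.Copy(tips, TipSet, TipSet.Length);
                return TbxMoveWash(armId, rackId, zSelect,
                    TipSetHandle.AddrOfPinnedObject(), vol, fastWash);
            }
        }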

    Read the article

  • Haskell: how to get through 'no instance for'?

    - by artemave
    I am learning Haskell. I am on the 8th chapter of this book. The main thing I've learned so far is that Haskell is very unfriendly to me and it bites my ass where possible. Moreover... Heck! Enough mourning, to business. Here is the code:

        module GlobRegex
            (
              globToRegex
            , matchesGlob
            ) where

        import Text.Regex.Posix
        import Text.Regex.Posix.String
        import Text.Regex.Base.RegexLike

        data CaseOpt = Case | NoCase deriving (Eq)

        matchesGlob :: String -> String -> CaseOpt -> Bool
        matchesGlob name pat caseopt = match regex name
            where regex = case caseopt of
                    NoCase -> makeRegexOpts (defaultCompOpt + compIgnoreCase) defaultExecOpt (globToRegex pat)
                    Case   -> makeRegex (globToRegex pat)

        globToRegex :: String -> String
        ...

    And here is how it fails to compile:

        Prelude Text.Regex.Posix Text.Regex.Base.RegexLike> :load globtoregex\GlobRegex.hs
        [1 of 1] Compiling GlobRegex ( globtoregex\GlobRegex.hs, interpreted )

        globtoregex\GlobRegex.hs:14:31:
            No instance for (RegexLike regex [Char])
              arising from a use of `match' at globtoregex\GlobRegex.hs:14:31-46
            Possible fix: add an instance declaration for (RegexLike regex [Char])
            In the expression: match regex name
            In the definition of `matchesGlob':
                matchesGlob name pat caseopt = match regex name
                  where regex = case caseopt of {
                          NoCase -> makeRegexOpts (defaultCompOpt + compIgnoreCase) defaultExecOpt (globToRegex pat)
                          Case -> makeRegex (globToRegex pat) }

        globtoregex\GlobRegex.hs:17:23:
            No instance for (RegexMaker regex CompOption execOpt String)
              arising from a use of `makeRegex' at globtoregex\GlobRegex.hs:17:23-49
            Possible fix: add an instance declaration for (RegexMaker regex CompOption execOpt String)
            In the expression: makeRegex (globToRegex pat)
            In a case alternative: Case -> makeRegex (globToRegex pat)
            In the expression:
                case caseopt of {
                  NoCase -> makeRegexOpts (defaultCompOpt + compIgnoreCase) defaultExecOpt (globToRegex pat)
                  Case -> makeRegex (globToRegex pat) }

        Failed, modules loaded: none.

    To my best understanding, Text.Regex.Posix.String provides instances for RegexLike Regex String and RegexMaker Regex CompOption ExecOption String, so it should work. On the other hand, I can see that regex in the error message is a type variable, not a concrete type, so perhaps not...

    Anyway, this is where I am stuck. Maybe there is a common pattern for resolving "no instance for" types of problems? Or, in Haskell terms, an instance of a SmartGuess typeclass for "no instance for"?
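    For what it's worth, the usual way out of this particular error is to pin the where-bound regex to a concrete type, so GHC can pick the instances instead of being left with an ambiguous type variable. A sketch, assuming the Regex type that Text.Regex.Posix exports:

        matchesGlob :: String -> String -> CaseOpt -> Bool
        matchesGlob name pat caseopt = match regex name
            where regex :: Regex  -- the concrete POSIX type; resolves both missing instances
                  regex = case caseopt of
                      NoCase -> makeRegexOpts (defaultCompOpt + compIgnoreCase)
                                              defaultExecOpt (globToRegex pat)
                      Case   -> makeRegex (globToRegex pat)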

    Read the article

  • Spring <jee:remote-slsb> and JBoss AS7 - No EJB receiver available for handling

    - by Lech Glowiak
    I have got a @Remote EJB on JBoss AS 7, available under the name java:global/RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom. The standalone client is a Spring application that uses a <jee:remote-slsb> bean. When trying to use that bean I get:

        java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:, moduleName:RandomEjb, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@1a89031

    Here is the relevant part of applicationContext.xml:

        <jee:remote-slsb id="remoteRandom"
            jndi-name="RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom"
            business-interface="pl.lechglowiak.ejbTest.RemoteRandom">
            <jee:environment>
                java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory
                java.naming.provider.url=remote://localhost:4447
                jboss.naming.client.ejb.context=true
                java.naming.security.principal=testuser
                java.naming.security.credentials=testpassword
            </jee:environment>
        </jee:remote-slsb>

        <bean id="remoteClient" class="pl.lechglowiak.RemoteClient">
            <property name="remote" ref="remoteRandom" />
        </bean>

    RemoteClient.java:

        public class RemoteClient {
            private RemoteRandom random;

            public void setRemote(RemoteRandom random) {
                this.random = random;
            }

            public Integer callRandom() {
                try {
                    return random.getRandom(100);
                } catch (Exception e) {
                    e.printStackTrace();
                    return null;
                }
            }
        }

    My JBoss client dependency: org.jboss.as : jboss-as-ejb-client-bom : 7.1.2.Final (pom). pl.lechglowiak.ejbTest.RemoteRandom is available on the client application classpath. jndi.properties contains exactly the same properties as in the <jee:environment> of <jee:remote-slsb>. This code runs without exception:

        Context ctx2 = new InitialContext();
        RemoteRandom rr = (RemoteRandom) ctx2.lookup("RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom");
        System.out.println(rr.getRandom(10000));

    But this:

        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        RemoteClient client = ctx.getBean("remoteClient", RemoteClient.class);
        System.out.println(client.callRandom());

    ends with the exception:

        java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:, moduleName:RandomEjb, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@1a89031

    jboss.naming.client.ejb.context=true is set. Do you have any idea what I am setting wrong in <jee:remote-slsb>?

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings with Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I have read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting in almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are instead defined by the end user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this, but would like to pose the question to the SO community.

    Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a separate "reporting" database (OLAP) to which the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design?

    The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible, and it seems an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (i.e. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against.

    Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

    EAV tables:

        Person
        ------------------
        - Id  - Name      -
        ------------------
        - 123 - Joe Smith -
        ------------------

        Person_Value
        ------------------------------------------------------------------
        - PersonId - Source - Field       - Value         - EffectiveDate -
        ------------------------------------------------------------------
        - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
        - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
        - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
        ------------------------------------------------------------------

    Reporting table:

        Person_Denormalized
        ---------------------------------------------------------------------------------------
        - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
        ---------------------------------------------------------------------------------------
        - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
        ---------------------------------------------------------------------------------------

    Normalized design:

        Person
        ------------------
        - Id  - Name      -
        ------------------
        - 123 - Joe Smith -
        ------------------

        Person_HomeAddress
        -----------------------------------------------------
        - PersonId - Source - Value         - Effective Date -
        -----------------------------------------------------
        - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
        - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
        - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
        -----------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) in SQL, so my most common operation, besides inserting new values, will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do it in a single query. In the normalized design, I end up having to do one query per field to avoid a massive Cartesian product from joining them all together.

    Read the article

  • SCORM 2004 Sequencing: What am I doing wrong?

    - by Van
    This quiz is the last SCO in a grouping of 4 SCOs. SCOs 1, 2, and 3 have to be completed before this quiz becomes available. The problem is that when 1, 2, and 3 are completed, the menu skips right over this quiz and goes to the first page in the next module. The quiz stays grayed out the entire time. I think it has to do with the precondition logic or the objectives, but I've tried everything I can think of and nothing works.

        <item identifier="quiz1_100" identifierref="res-quiz1" isvisible="true">
          <title>Quiz 1</title>
          <imsss:sequencing>
            <imsss:controlMode choice="true" choiceExit="false" flow="true" forwardOnly="false"
                useCurrentAttemptObjectiveInfo="false" useCurrentAttemptProgressInfo="false" />
            <imsss:sequencingRules>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="any">
                  <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="disabled" />
              </imsss:preConditionRule>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="any">
                  <imsss:ruleCondition referencedObjective="obj_1000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_2000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="obj_3000_VHKP_test" operator="not" condition="objectiveStatusKnown" />
                  <imsss:ruleCondition referencedObjective="quiz_primary" operator="not" condition="objectiveStatusKnown" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="skip" />
              </imsss:preConditionRule>
              <imsss:preConditionRule>
                <imsss:ruleConditions conditionCombination="all">
                  <imsss:ruleCondition condition="completed" />
                </imsss:ruleConditions>
                <imsss:ruleAction action="skip" />
              </imsss:preConditionRule>
            </imsss:sequencingRules>
            <imsss:objectives>
              <imsss:primaryObjective objectiveID="quiz_primary" satisfiedByMeasure="true">
                <imsss:minNormalizedMeasure>0.8</imsss:minNormalizedMeasure>
                <imsss:mapInfo targetObjectiveID="quiz_complete" writeNormalizedMeasure="true" writeSatisfiedStatus="true" />
              </imsss:primaryObjective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_1000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_1000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_2000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_2000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_3000_VHKP_test">
                <imsss:mapInfo targetObjectiveID="gObj_3000_VHKP" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              <!--
              <imsss:objective satisfiedByMeasure="false" objectiveID="obj_quiz1">
                <imsss:mapInfo targetObjectiveID="quiz_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
              -->
              <imsss:objective satisfiedByMeasure="false" objectiveID="course_complete">
                <imsss:mapInfo targetObjectiveID="obj_EJBOWNADV_primary" readSatisfiedStatus="true" readNormalizedMeasure="false" />
              </imsss:objective>
            </imsss:objectives>
            <imsss:deliveryControls tracked="true" completionSetByContent="true" objectiveSetByContent="false" />
          </imsss:sequencing>
        </item>

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository - not just the latest, but each one back to a certain starting point? (A rough sketch of such a script follows at the end of this post.)

    Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure the commit won't "break the build" when pushed to the CI server. Unfortunately, with automatic unit tests, those builds force the developer to wait 10 or 15 minutes every time. To solve this we have set up a "mirror" git repository on each developer's PC. We develop in the main repository, but any time a local full build is needed, we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works lovely, so we can continue working in the main one with the build going in parallel.

    There's only one main concern now. We want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to bisect or manually build each interim commit to figure out which one broke. Requirements for this tool:

        - The tool will look at another repo, origin by default, fetch, and compare all commits that are in branches against 2 lists of commits. One list must hold successfully built commits and the other holds commits that failed. It identifies any commit or commits not yet in either list and begins to build them in a loop in the order that they were committed. It stops on the first one that fails.
        - The tool appropriately adds each commit to either the successful or failed list after it has attempted to build it.
        - The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point possible in the next point.
        - Starting point: the tool builds a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", so that none of the commits prior to it are examined for builds.
        - Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch, it will stop without building the rest that followed after that one. Instead it will just move on to another branch, if any.
        - The tool must do these steps once by default but allow a parameter to loop, with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could offer fancier scheduling options.
        - The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? All of them by default. What tool? By default, an executable file to be provided by the user named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe" in the root folder of the repository. Loop delay? Run once by default, with an option to loop after a number of seconds between iterations.
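    For illustration, a minimal shell sketch of the core loop under the assumptions above (a single linear branch; the pass/fail lists kept as plain files under .git/ - those file names, and the branch choice, are invented). It omits the legacy-commit cutoff and multi-branch handling:

        #!/bin/sh
        # Build every not-yet-recorded commit on origin/master, oldest first,
        # stopping at the first failure.
        PASS=.git/buildtest-pass
        FAIL=.git/buildtest-fail
        touch "$PASS" "$FAIL"

        git fetch origin

        for c in $(git rev-list --reverse origin/master); do
            # Skip commits already recorded as built or broken.
            grep -q "$c" "$PASS" "$FAIL" && continue
            git checkout -q "$c"
            if ./buildtest; then
                echo "$c" >> "$PASS"
            else
                echo "$c" >> "$FAIL"
                break   # first broken commit found; stop, as the spec requires
            fi
        done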

    Read the article

  • WPF: How do you properly bind a DependencyProperty to the GUI?

    - by Robert Ross
    I have the following class (abbreviated for simplicity). The app is multi-threaded, so the getter and setter are a bit more complicated, but they should be OK.

        namespace News.RSS
        {
            public class FeedEngine : DependencyObject
            {
                public static readonly DependencyProperty _processing =
                    DependencyProperty.Register("Processing", typeof(bool), typeof(FeedEngine),
                        new FrameworkPropertyMetadata(true, FrameworkPropertyMetadataOptions.AffectsRender));

                public bool Processing
                {
                    get
                    {
                        return (bool)this.Dispatcher.Invoke(
                            DispatcherPriority.Normal,
                            (DispatcherOperationCallback)delegate { return GetValue(_processing); },
                            Processing);
                    }
                    set
                    {
                        this.Dispatcher.BeginInvoke(DispatcherPriority.Normal,
                            (SendOrPostCallback)delegate { SetValue(_processing, value); },
                            value);
                    }
                }

                public void Poll()
                {
                    while (Running)
                    {
                        Processing = true;
                        // Do my work to read the data feed from the remote source
                        Processing = false;
                        Thread.Sleep(PollRate);
                    }
                }
            }
        }

    Next I have my main form as the following:

        <Window x:Class="News.Main"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:converter="clr-namespace:News.Converters"
            xmlns:local="clr-namespace:News.Lookup"
            xmlns:rss="clr-namespace:News.RSS"
            Title="News" Height="521" Width="927"
            Initialized="Window_Initialized" Closing="Window_Closing" >
            <Window.Resources>
                <ResourceDictionary>
                    <converter:BooleanConverter x:Key="boolConverter" />
                    <converter:ArithmeticConverter x:Key="arithConverter" />
                    ...
                </ResourceDictionary>
            </Window.Resources>
            <DockPanel Name="dockPanel1" SnapsToDevicePixels="False" >
                <ToolBarPanel Height="37" Name="toolBarPanel" Orientation="Horizontal" DockPanel.Dock="Top" >
                    <ToolBarPanel.Children>
                        <Button DataContext="{DynamicResource FeedEngine}" HorizontalAlignment="Right"
                                Name="btnSearch" ToolTip="Search" Click="btnSearch_Click"
                                IsEnabled="{Binding Path=Processing, Converter={StaticResource boolConverter}}">
                            <Image Width="32" Height="32" Name="imgSearch"
                                   Source="{Resx ResxName=News.Properties.Resources, Key=Search}" />
                        </Button>
                        ...
            </DockPanel>
        </Window>

    As you can see, I set the DataContext to FeedEngine and bind IsEnabled to Processing. I have also tested the boolConverter separately and it works (it just applies ! (not) to a bool). Here is my main window code-behind in case it helps to debug:

        namespace News
        {
            /// <summary>
            /// Interaction logic for Main.xaml
            /// </summary>
            public partial class Main : Window
            {
                public FeedEngine _engine;
                List<NewsItemControl> _newsItems = new List<NewsItemControl>();
                Thread _pollingThread;

                public Main()
                {
                    InitializeComponent();
                    this.Show();
                }

                private void Window_Initialized(object sender, EventArgs e)
                {
                    // Load current feed data.
                    _engine = new FeedEngine();
                    ThreadStart start = new ThreadStart(_engine.Poll);
                    _pollingThread = new Thread(start);
                    _pollingThread.Start();
                }
            }
        }

    Hope someone can see where I missed a step. Thanks.
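    One thing that stands out in the posted code: the button's DataContext is {DynamicResource FeedEngine}, but nothing shown ever adds a resource with that key, so the binding may simply have no source. A hedged alternative sketch - assuming the goal is just an IsEnabled flag updated from the polling thread - is to drop DependencyObject in favor of INotifyPropertyChanged (WPF marshals change notifications for scalar bindings to the UI thread, so the Dispatcher gymnastics go away) and assign the DataContext in code-behind:

        using System.ComponentModel;

        namespace News.RSS
        {
            // Sketch only: a plain INotifyPropertyChanged source. No Dispatcher calls
            // are needed for scalar bindings, even when Processing is set from the
            // polling thread.
            public class FeedEngine : INotifyPropertyChanged
            {
                private volatile bool _processing;

                public bool Processing
                {
                    get { return _processing; }
                    set
                    {
                        if (_processing == value) return;
                        _processing = value;
                        PropertyChangedEventHandler handler = PropertyChanged;
                        if (handler != null)
                            handler(this, new PropertyChangedEventArgs("Processing"));
                    }
                }

                public event PropertyChangedEventHandler PropertyChanged;
            }
        }

    Then, in Window_Initialized, something like btnSearch.DataContext = _engine; (or this.DataContext = _engine;) would replace the unresolved {DynamicResource FeedEngine}.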

    Read the article

  • Does anyone have database, programming language/framework suggestions for a GUI point of sale system

    - by Jason Down
    Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (a Progress database, with Progress 4GL for the language) incurs quite a bit of licensing expense for our customers due to multi-user license fees for database connections, etc.

    After a lot of discussion, it is looking like we will probably start over from scratch (while maintaining the current product, at least for the time being). We are looking for a couple of things:

        - Create the system with a nice GUI front end (it is currently CHUI, and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
        - Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag; the bells and whistles would be available for those that want them.
        - Use proper design patterns to make the product easy to add to or change at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database: small changes in the database require lots of code recompiling.

    Our new system will be Linux-based, with a possibility of a client application providing functionality from one or more Windows boxes. So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation.

    To summarize, we are looking to do the following:

        - Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
        - Deliver a solution that is easily maintainable and supportable.
        - Provide a solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would be a ton of cash to convert over to PCs).

    Suggestions would be appreciated. Thanks.

    [UPDATE] I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).

    Read the article

  • MySQL - Help me change this single complex query to use temporary tables

    - by sandeepan-nath
    About the system:

        - There are tutors who create classes and packs.
        - A tags-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details please check the section "How tags work in this system?" below.

    Following is the concerned query. Can anybody help me suggest an approach using temporary tables? (A schematic sketch of this approach is given at the end of this post.) We have indexed all the relevant fields, and it looks like this is the least time possible with this approach:

        SELECT SUM(DISTINCT(t.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%Dictatorship%")) AS key_1_total_matches,
               SUM(DISTINCT(t.tag LIKE "%democracy%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%democracy%")) AS key_2_total_matches,
               COUNT(DISTINCT(od.id_od)) AS tutor_popularity,
               CASE WHEN (IF((wc.id_wc > 0),
                             (wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56'
                              AND wccp.status = 1 AND (wccp.country_code = 'IE' OR wccp.country_code IN ('INT'))),
                             0))
                    THEN 1 ELSE 0 END AS 'classes_published',
               CASE WHEN (IF((lp.id_lp > 0),
                             (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1
                              AND (lpcp.country_code = 'IE' OR lpcp.country_code IN ('INT'))),
                             0))
                    THEN 1 ELSE 0 END AS 'packs_published',
               td.*, u.*
        FROM tutor_details AS td
        JOIN users AS u ON u.id_user = td.id_user
        LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat
        LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent
        LEFT JOIN learning_pack_content AS lpct ON (lp.id_lp = lpct.id_lp)
        LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
        LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc
        LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat
        LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent
        LEFT JOIN order_details AS od ON td.id_tutor = od.id_author
        LEFT JOIN orders AS o ON od.id_order = o.id_order
        LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor
        LEFT JOIN tags AS t ON t.id_tag = ttagrels.id_tag
        LEFT JOIN tags AS tt ON tt.id_tag = lptagrels.id_tag
        LEFT JOIN tags AS ttt ON ttt.id_tag = wtagrels.id_tag
        WHERE (u.country = 'IE' OR u.country IN ('INT'))
          AND CASE WHEN ((tt.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
                   THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1
                        AND (lpcp.country_code = 'IE' OR lpcp.country_code IN ('INT'))
                   ELSE 1 END
          AND CASE WHEN ((ttt.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
                   THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56'
                        AND wccp.status = 1 AND (wccp.country_code = 'IE' OR wccp.country_code IN ('INT'))
                   ELSE 1 END
          AND CASE WHEN (od.id_od > 0)
                   THEN od.id_author = td.id_tutor AND o.order_status = 'paid'
                        AND CASE WHEN (od.id_wc > 0) THEN od.can_attend_class = 1 ELSE 1 END
                   ELSE 1 END
          AND (t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%"
               OR tt.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%democracy%"
               OR ttt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%democracy%")
        GROUP BY td.id_tutor
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC
        LIMIT 0, 20

    The problem: the results returned by the above query are correct (the AND logic works as expected), but the time taken by the query rises alarmingly for heavier data. For the current data I have, it is around 10 seconds, as against normal query timings of the order of 0.005-0.0002 seconds, which makes it totally unusable.

    Somebody suggested in my previous question to do the following:

        - create a temporary table and insert into it all relevant data that might end up in the final result set
        - run several updates on this table, joining the required tables one at a time instead of all of them at the same time
        - finally, perform a query on this temporary table to extract the end result

    All this was done in a stored procedure; the end result has passed unit tests and is blazing fast. I have never worked with temporary tables until now. If only I could get some hints, some kind of schematic representation, so that I can get started...

        - Is there something faulty with the query?
        - What can be the reason behind 10+ seconds of execution time?

    How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details, like name, surname, etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details, like pack name, description, etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs are stored in learning_packs_tag_relations. All individual tags are stored in the tags table.

    The EXPLAIN query output: please see this screenshot - http://www.test.examvillage.com/Explain_query_improved.jpg
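    To make the suggested approach concrete, here is a schematic MySQL sketch. The table names are taken from the query above, but the staging columns and the exact UPDATE steps are illustrative only - this is the shape of the solution, not a drop-in replacement:

        -- 1. Stage the candidate tutors once.
        CREATE TEMPORARY TABLE tmp_tutors (
            id_tutor INT PRIMARY KEY,
            key_1_total_matches INT NOT NULL DEFAULT 0,
            key_2_total_matches INT NOT NULL DEFAULT 0,
            tutor_popularity INT NOT NULL DEFAULT 0
        ) ENGINE=MEMORY;

        INSERT INTO tmp_tutors (id_tutor)
        SELECT td.id_tutor
        FROM tutor_details td
        JOIN users u ON u.id_user = td.id_user
        WHERE u.country = 'IE' OR u.country IN ('INT');

        -- 2. One focused UPDATE per source, instead of one giant 15-way join.
        UPDATE tmp_tutors tmp
        JOIN tutors_tag_relations ttagrels ON ttagrels.id_tutor = tmp.id_tutor
        JOIN tags t ON t.id_tag = ttagrels.id_tag
        SET tmp.key_1_total_matches = 1
        WHERE t.tag LIKE '%Dictatorship%';

        -- ...similar UPDATEs for '%democracy%', for the pack and class tag
        -- matches, and one joining order_details/orders for tutor_popularity...

        -- 3. The final query is now trivial.
        SELECT *
        FROM tmp_tutors
        WHERE key_1_total_matches = 1 AND key_2_total_matches = 1
        ORDER BY tutor_popularity DESC
        LIMIT 0, 20;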

    Read the article

  • Managing a difficult manager

    - by griegs
    I have a situation here at work. We are redeveloping our basic architecture across the entire company. Currently we have the following hierarchy:

        - SQL database (stored procs not allowed)
        - nHibernate
        - Classes to convert nHibernate objects into our own objects
        - Web service (for all external and [internal] calls)
        - Class to take objects from the web service and turn them back into our own objects, and then...
        - Normal n-tier application architecture, such as data transformation layer, business layer, etc.

    Within the database, when we are writing a hierarchy of objects - say, for example:

        Order
            Person
                Details
                Address
            Product
            Other

    we need to serialise the object and save it, in its entirety, to an image field in a table. No attempt has been made to store the objects in their own tables so that we can do useful stuff like report on them. This is an architecture that was implemented [way] before I started and, as you can probably appreciate, is a complete nightmare, not to mention slow as a wet weekend. We're not even allowed to have stored procs within SQL Server, because in my boss's last job they had a hundred or so and he had a problem identifying them all - therefore all stored procs are the devil.

    Now the same person that developed the above architecture has developed the new one. It came as no surprise that he's essentially used the same framework, only now it's using .NET 3.5 with interfaces and generics. We still have to go through web services, still need to serialise (everything), still are not allowed to use stored procs, etc. In fact, we're only barely able to bang two rocks together here. He says the framework is open for discussion, but when you discuss it, unless you approve of his design, you are told flatly "No". He simply won't listen to any other suggestions. Even when you show him demo applications of his proposed architecture versus yours, and he can see the speed difference, he still won't take that on board.

    So I guess my question is - and I know others out there have experienced the same things - how do I get through to someone like this? How do you convince someone to ditch web services for internal calls and applications? How do you demonstrate, and make it stick, that stored procs are a better way to go than ad-hoc SQL statements?

    This is killing me. I don't want to repeat the mistakes of the past, and I certainly don't want to write code that I know is going to be slow and cumbersome. Help!

    Read the article

  • Implementing EAV pattern with Hibernate for User -> Settings relationship

    - by Trevor
    I'm trying to set up a simple EAV pattern in my web app using Java/Spring MVC and Hibernate. I can't seem to figure out the magic behind the Hibernate XML setup for this scenario. My database table SETUP has three columns:

        - user_id (FK)
        - setup_item
        - setup_value

    The composite key is made up of user_id | setup_item. Here's the Setup.java class:

        public class Setup implements CommonFormElements, Serializable {

            private Map data = new HashMap();
            private String saveAction;
            private Integer speciesNamingList;
            private User user;

            Logger log = LoggerFactory.getLogger(Setup.class);

            public String getSaveAction() {
                return saveAction;
            }

            public void setSaveAction(String action) {
                this.saveAction = action;
            }

            public User getUser() {
                return user;
            }

            public void setUser(User user) {
                this.user = user;
            }

            public Integer getSpeciesNamingList() {
                return speciesNamingList;
            }

            public void setSpeciesNamingList(Integer speciesNamingList) {
                this.speciesNamingList = speciesNamingList;
            }

            public Map getData() {
                return data;
            }

            public void setData(Map data) {
                this.data = data;
            }
        }

    My problem with the Hibernate setup is that I can't figure out how to express the fact that a foreign key plus the key of a map make up the composite key of the table - this is due to a lack of experience with Hibernate. Here's my initial attempt at getting this to work:

        <composite-id>
            <key-many-to-one foreign-key="id" name="user" column="user_id" class="Business.User">
                <meta attribute="use-in-equals">true</meta>
            </key-many-to-one>
        </composite-id>

        <map lazy="false" name="data" table="setup">
            <key column="user_id" property-ref="user"/>
            <composite-map-key class="Command.Setup">
                <key-property name="data" column="setup_item" type="string"/>
            </composite-map-key>
            <element column="setup_value" not-null="true" type="string"/>
        </map>

    Any insight into how to properly map this common scenario would be most appreciated!
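    For reference, a composite-id class is often unnecessary for this shape of table: Hibernate's <map> collection models the user_id + setup_item key on its own, with setup_item as the map key. A hedged sketch, assuming the settings map is hung off the User mapping instead (column names and the class name Business.User follow the question; the surrounding <class> details are invented):

        <class name="Business.User" table="users">
            <id name="id" column="user_id"/>
            <!-- One row in SETUP per (user_id, setup_item); the map key is setup_item. -->
            <map name="settings" table="SETUP" lazy="false">
                <key column="user_id"/>
                <map-key column="setup_item" type="string"/>
                <element column="setup_value" type="string"/>
            </map>
        </class>

    The User class would then carry a Map<String, String> settings property, and user.getSettings().get("some_item") replaces a lookup through the Setup entity.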

    Read the article

  • multiple-inheritance substitution

    - by Luigi
    I want to write a module (framework-specific) that would wrap and extend the Facebook PHP SDK (https://github.com/facebook/php-sdk/). My problem is how to organize the classes in a nice way. Getting into details - the Facebook PHP SDK consists of two classes:

        - BaseFacebook - an abstract class with all the stuff the SDK does
        - Facebook - extends BaseFacebook and implements the parent's abstract persistence-related methods with default session usage

    Now I have some functionality to add:

        - a Facebook class substitute, integrated with the framework session class
        - shorthand methods that run the API calls I use most often (through BaseFacebook::api())
        - authorization methods, so I don't have to rewrite this logic every time
        - configuration, pulled from framework classes instead of passed as params
        - caching, integrated with the framework cache module

    I know something has gone very wrong, because I have too much inheritance and it doesn't look very normal. Wrapping everything in one "complex extension" class also seems too much. I think I should have a few classes working together, but then I get into problems like: if the cache class doesn't actually extend and override BaseFacebook::api(), the shorthand and authentication classes won't be able to use the caching. Maybe some kind of pattern would be right here? How would you organize these classes and their dependencies?

    EDIT 04.07.2012 - Bits of code related to the topic. This is the base class of the Facebook PHP SDK:

        abstract class BaseFacebook
        {
            // ... some methods

            public function api(/* polymorphic */) {
                // ... method that makes api calls
            }

            public function getUser() {
                // ... tries to get user id from session
            }

            // ... other methods

            abstract protected function setPersistentData($key, $value);
            abstract protected function getPersistentData($key, $default = false);
            // ... a few more abstract methods
        }

    Normally the Facebook class extends it and implements those abstract methods. I replaced it with my substitute - the Facebook_Session class:

        class Facebook_Session extends BaseFacebook
        {
            protected function setPersistentData($key, $value) {
                // ... method body
            }

            protected function getPersistentData($key, $default = false) {
                // ... method body
            }

            // ... implementation of the other abstract functions from BaseFacebook
        }

    OK, then I extend this further with shorthand methods and configuration variables:

        class Facebook_Custom extends Facebook_Session
        {
            public function __construct() {
                // ... call parent's constructor with parameters from the framework config
            }

            public function api_batch() {
                // ... a wrapper for the parent's api() method
                return $this->api('/?batch=' . json_encode($calls), 'POST');
            }

            public function redirect_to_auth_dialog() {
                // method body
            }

            // ... more methods like this, for common queries / authorization
        }

    I'm not sure if this isn't already too much for a single class (authorization / shorthand methods / configuration). Then comes another extending layer - the cache:

        class Facebook_Cache extends Facebook_Custom
        {
            public function api() {
                $cache_file_identifier = $this->getUser();

                if (/* $cache_file_identifier is not null and we found a valid file with cached query results */) {
                    // return the result
                } else {
                    try {
                        // call Facebook_Custom::api, cache and return the result
                    } catch (FacebookApiException $e) {
                        // if the access token is expired, force refreshing it
                        parent::redirect_to_auth_dialog();
                    }
                }
            }

            // ... some other stuff related to caching
        }

    Now this pretty much works. A new instance of Facebook_Cache gives me all the functionality, and the shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrode the api() method. But here is what is bothering me:

        - I think it's too much inheritance.
        - It's all very tightly coupled - look at how I had to specify Facebook_Custom::api instead of parent::api to avoid an api() method loop when extending Facebook_Cache.
        - Overall mess and ugliness.

    So again, this works, but I'm just asking about patterns / ways of doing this in a cleaner and smarter way.
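    One cleaner direction, sketched below, is composition: keep Facebook_Session as the single SDK subclass, and make caching a decorator that wraps any instance and intercepts api(), so shorthand and authorization helpers can live in small collaborators instead of an inheritance chain. This is only an illustration - the $cache dependency (assumed to have has/get/set methods) and all class names here are invented:

        <?php

        class Facebook_Cached_Api
        {
            private $facebook; // any BaseFacebook subclass, e.g. Facebook_Session
            private $cache;    // hypothetical framework cache with has()/get()/set()

            public function __construct(BaseFacebook $facebook, $cache)
            {
                $this->facebook = $facebook;
                $this->cache = $cache;
            }

            // Intercept api() calls; results are keyed by user + arguments.
            public function api()
            {
                $args = func_get_args();
                $key = md5($this->facebook->getUser() . serialize($args));

                if ($this->cache->has($key)) {
                    return $this->cache->get($key);
                }

                $result = call_user_func_array(array($this->facebook, 'api'), $args);
                $this->cache->set($key, $result);
                return $result;
            }

            // Everything else falls through to the wrapped SDK object.
            public function __call($method, $args)
            {
                return call_user_func_array(array($this->facebook, $method), $args);
            }
        }

    Shorthand and authorization helpers would then take the wrapper in their constructors and call its api(), getting caching for free without knowing it exists - no Facebook_Custom::api workaround needed.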

    Read the article

  • How to eager load sibling data using LINQ to SQL?

    - by Scott
    The goal is to issue the fewest queries to SQL Server using LINQ to SQL, without using anonymous types. The return type for the method will need to be IList<Child1>. The relationships are as follows:

        Parent -> Child1 -> Grandchild1
        Parent -> Child2

        - Parent -> Child1 is a one-to-many relationship
        - Child1 -> Grandchild1 is a one-to-n relationship (where n is zero to infinity)
        - Parent -> Child2 is a one-to-n relationship (where n is zero to infinity)

    I am able to eager load the Parent, Child1 and Grandchild1 data, resulting in one query to SQL Server. This query with load options eager loads all of the data except the sibling data (Child2):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        dataContext.LoadOptions = loadOptions;

        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

    I need to load the sibling data as well. One approach I have tried is splitting the query into two LINQ to SQL queries and merging the result sets together (not pretty), but upon accessing the sibling data it is lazy loaded anyway. Adding the sibling load option will issue a query to SQL Server for each Grandchild1 and Child2 record (which is exactly what I am trying to avoid):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        loadOptions.LoadWith<Parent>(o => o.Child2List);
        dataContext.LoadOptions = loadOptions;

        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=1
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=2
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=3
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=4

    I've also written LINQ to SQL queries to join in all of the data, in hopes that it would eager load it, but when the LINQ to SQL EntitySet of Child2 or Grandchild1 is accessed, it lazy loads the data. The reason for returning IList<Child1> is to hydrate business objects. My thoughts are that I am either:

        - approaching this problem the wrong way,
        - left with the option of calling a stored procedure, or
        - in an organization that should not be using LINQ to SQL as an ORM.

    Any help is greatly appreciated. Thank you, -Scott
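    Given the per-parent Child2 queries in the log above, one workaround worth sketching is to fetch the siblings in a single second query and hand both result sets to the hydration code, instead of touching Parent.Child2List. This is an assumption-laden sketch: MyDataContext, Child1.ParentId and the list property names are invented, and ForeignKeyToParent is borrowed from the logged SQL:

        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        public static IList<Child1> LoadChildrenWithSiblings(MyDataContext dataContext)
        {
            // Query 1: Child1 with Parent and Grandchild1 eager-loaded, as before.
            var loadOptions = new DataLoadOptions();
            loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
            loadOptions.LoadWith<Child1>(o => o.Parent);
            dataContext.LoadOptions = loadOptions;

            List<Child1> children = dataContext.Child1.ToList();

            // Query 2: every sibling for every parent seen above, in one round trip
            // (Contains translates to an IN clause).
            List<int> parentIds = children.Select(c => c.ParentId).Distinct().ToList();
            ILookup<int, Child2> siblings = dataContext.Child2
                .Where(c2 => parentIds.Contains(c2.ForeignKeyToParent))
                .ToLookup(c2 => c2.ForeignKeyToParent);

            // Hydrate business objects from 'children' plus 'siblings[parentId]'
            // rather than Parent.Child2List, which would lazy-load per parent row.
            return children;
        }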

    Read the article

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
    In 2005 I tried to establish a WinQual account with Microsoft, so I could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). I was not allowed to have my crash dumps, because I don't have a VeriSign certificate. Instead I have a cheaper one, generated by a VeriSign subsidiary: Thawte.

    The method by which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed the apps they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature for the matching public key. It doesn't matter who generated that private key. That includes certificates that are generated from:

        - self-signing
        - Wells Fargo
        - DigiCert
        - SecureTrust
        - Trustware
        - QuoVadis
        - GoDaddy
        - Entrust
        - Cybertrust
        - GeoTrust
        - GlobalSign
        - Comodo
        - Thawte
        - VeriSign

    Yet Microsoft's WinQual only accepts digital certificates generated by VeriSign. Not even VeriSign's subsidiaries are good enough (Thawte). Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept other code-signing certificates? The WinQual site says:

        Why Is a Digital Certificate Required for Winqual Membership?
        A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization.

    Is a Thawte digital certificate somehow not secure?

    Two years later, I sent a reminder notice to WinQual that I've been waiting to be able to get at my crash dumps. The response from the WinQual team was:

        Hello, thanks for the reminder. We have notified the appropriate people that this is still a request.

    In 2008 I asked this question in a Microsoft support forum, and the response was:

        We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates.

    What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable whose crash dump you got in the wild, it is proven - they are my crash dumps, give them to me. And it's not like there's a special API to check whether a VeriSign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free not to trust the signer, but that's not the same as identity.

    So that is my question: can anyone think of any practical reason why WinQual isn't set up to support other digital signatures? One person theorized that the answer is that they're just lazy:

        Not that I know but I would assume that the team running the WinQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though.

    In other words, they don't want to do the work to change it. But can anyone think of anything that would actually need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match?". What am I missing?
    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:
    501 57 mds 21 /
    501 57 mds 21 /Volumes/Sno Leppard
    501 57 mds 21 /Volumes/Tiger
    501 57 mds 21 /Volumes/Leppard
    501 57 mds 21 /Volumes/Disk Warrior
    501 57 mds 21 /Volumes/ONM Data
    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are currently excluded from Spotlight indexing. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M
    __final edit__
    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process ID of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find the actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M

    Read the article

  • UpdateProgress with UpdatePanel not showing up in User control when page is loading

    - by Carter
    Is this typical behavior of the UpdateProgress for an ASP.NET UpdatePanel? I have an UpdatePanel with an UpdateProgress control inside a user control window on a page. If I make the page do some loading in the background and then click a button in the user control's UpdatePanel, the UpdateProgress does not show up at all. It's as if the UpdatePanel's refresh request is not even registered until the actual page is done doing its business. It's worth noting that the UpdateProgress will show up if nothing is happening in the background.
    The functionality I want is what you would expect: I want the loader to show up if, after the button is clicked, the panel has to wait for anything before its refresh is done. I know I can get this functionality if I just use jQuery AJAX with a static web method, but you can't have static web methods inside a user control. I could put one in the page, but it really doesn't belong there, and a full-blown WCF service wouldn't be worth it in this case either. I'm trying to compromise with an UpdatePanel, but these things always seem to cause me some kind of trouble. Maybe this is just the way it works?
    Edit: So I'll clarify a bit what I'm doing. I have a page whose only contents are some tools on the side and a big map. When the page initially loads, it takes some time to load the map. If, while the map is loading, I open the tool (a user control) that has the UpdatePanel in question in it and click the button that should refresh the panel with new data and show the loading sign (in the UpdateProgress), then the UpdateProgress loading image does not show up. However, the code run by the button click does run after the page is done loading (as expected), and the UpdateProgress will show up if nothing on the page containing the user control is loading. I just want the loader to show up while the page is loading. I thought the problem might be that the map loading happens in its own UpdatePanel and my UpdateProgress was only associated with the user control's UpdatePanel - hence no loading icon while the map was loading - but this is not the case.
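    For reference, here is a minimal markup sketch of the association being described - the control IDs and loader image are hypothetical, not taken from the original page. An UpdateProgress tied to a single UpdatePanel via AssociatedUpdatePanelID only displays for async postbacks that originate from that panel; leaving AssociatedUpdatePanelID unset makes it display for any async postback on the page:

    <asp:UpdatePanel ID="ToolPanel" runat="server" UpdateMode="Conditional">
        <ContentTemplate>
            <asp:Button ID="RefreshButton" runat="server" Text="Refresh"
                OnClick="RefreshButton_Click" />
        </ContentTemplate>
    </asp:UpdatePanel>

    <asp:UpdateProgress ID="ToolProgress" runat="server"
        AssociatedUpdatePanelID="ToolPanel" DynamicLayout="true">
        <ProgressTemplate>
            <img src="loading.gif" alt="Loading..." />
        </ProgressTemplate>
    </asp:UpdateProgress>

    Note that this sketch only shows the association; it does not by itself change the one-request-at-a-time page lifecycle behavior the question is about.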

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to locations in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach to accomplishing this goal. I have come up with several designs.

    Manual location tracking. Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read.
    Advantages: Very simple. No class bloat. No need to rewrite the TextReader methods.
    Disadvantages: Couples location tracking logic to the tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code.

    Plain TextReader wrapper. Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example (requires using System.Linq for Skip/Take):

    public class LocatedTextReaderWrapper : TextReader
    {
        private TextReader source;

        public Location Location { get; set; }

        public LocatedTextReaderWrapper(TextReader source)
            : this(source, new Location())
        {
        }

        public LocatedTextReaderWrapper(TextReader source, Location location)
        {
            this.Location = location;
            this.source = source;
        }

        public override int Read(char[] buffer, int index, int count)
        {
            int ret = this.source.Read(buffer, index, count);
            if (ret > 0)
            {
                // Advance by the characters actually read (ret), not by count.
                this.Location.AdvanceString(string.Concat(buffer.Skip(index).Take(ret)));
            }
            return ret;
        }

        // etc.
    }

    Advantages: Tokenization doesn't know about Location tracking.
    Disadvantages: The user needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking, or different location trackers, to be added without layers of wrappers.

    Event-based TextReader wrapper. Like LocatedTextReaderWrapper, but decoupled from the Location object, raising an event whenever data is read.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once.
    Disadvantages: Requires boilerplate code to enable location tracking. The user needs to create and dispose the wrapper instance, in addition to their TextReader instance.

    Aspect-oriented approach. Use AOP to achieve what the event-based wrapper approach does.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods.
    Disadvantages: Requires external dependencies, which I want to avoid.

    I am looking for the best approach in my situation. I would like to: not bloat the tokenizer methods with location tracking; not require heavy initialization in user code; not have any/much boilerplate/duplicated code; and (perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments are welcome. Thanks! (For those who want a specific question: what is the best way to wrap the functionality of a TextReader?) I have implemented the "Plain TextReader wrapper" and "Event-based TextReader wrapper" approaches and am displeased with both, for the reasons mentioned in their disadvantages.
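    Since the event-based option is described only in prose, here is a minimal sketch of what such a wrapper might look like - the type and event names are hypothetical, only the Read overloads relevant to the idea are shown, and disposal of the wrapped reader is omitted for brevity:

    using System;
    using System.IO;

    // Sketch of the "event-based TextReader wrapper" option: the wrapper
    // knows nothing about Location; subscribers do whatever tracking they want.
    public class NotifyingTextReader : TextReader
    {
        private readonly TextReader source;

        // Raised with the characters actually consumed from the source.
        public event Action<string> DataRead;

        public NotifyingTextReader(TextReader source)
        {
            this.source = source;
        }

        public override int Peek()
        {
            return this.source.Peek();
        }

        public override int Read()
        {
            int c = this.source.Read();
            if (c >= 0)
            {
                this.OnDataRead(((char)c).ToString());
            }
            return c;
        }

        public override int Read(char[] buffer, int index, int count)
        {
            int ret = this.source.Read(buffer, index, count);
            if (ret > 0)
            {
                this.OnDataRead(new string(buffer, index, ret));
            }
            return ret;
        }

        private void OnDataRead(string data)
        {
            Action<string> handler = this.DataRead;
            if (handler != null)
            {
                handler(data);
            }
        }
    }

    Hooking a Location up is then one statement in user code - e.g. reader.DataRead += location.AdvanceString; - which is exactly the boilerplate the disadvantages list refers to.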

    Read the article

  • Winform radiobutton data binding

    - by Rajarshi
    I am following the "Presentation Model" design pattern suggested by Martin Fowler for my GUI architecture in a Windows Forms project. "The essence of a Presentation Model is of a fully self-contained class that represents all the data and behavior of the UI window, but without any of the controls used to render that UI on the screen. A view then simply projects the state of the presentation model onto the glass...." - Martin Fowler. Read more about this pattern at www.martinfowler.com/eaaDev/PresentationModel.html
    I am finding the concept very fluid and easy to understand, except for this one issue of data binding RadioButtons to properties on the data/domain object. Supposing I have a Windows Form with 3 radio buttons to depict some "Mode" options - Auto, Manual, Import - how can I use boolean properties on the data/domain object to data bind to these buttons? I have tried many ways, but to no avail. For example, I would like to code:

    rbtnAutoMode.DataBindings.Add("Checked", myBusinessObject, "IsAutoMode");
    rbtnManualMode.DataBindings.Add("Checked", myBusinessObject, "IsManualMode");
    rbtnImportMode.DataBindings.Add("Checked", myBusinessObject, "IsImportMode");

    There should be a fourth property, like "SelectedMode", on the data/domain object which in the end should hold a single value such as "SelectedMode = Auto". I am trying to update this property whenever any of "IsAutoMode", "IsManualMode" or "IsImportMode" changes, e.g. through the property setters. I have INotifyPropertyChanged implemented on my data/domain object, so updating any data/domain object property automatically updates my UI controls; that's not an issue.
    There is a good example of binding 2 radio buttons here - http://stackoverflow.com/questions/344964/how-do-i-use-databinding-with-windows-forms-radio-buttons - but I am missing the link when implementing the same with 3 buttons. I am getting very erratic behavior from the radio buttons. I hope I was able to explain reasonably; I am in a hurry and could not put detailed code in the post, but any help in this regard is appreciated.
    There is a simple solution to this issue: exposing a method like

    public void SetMode(Modes mode)
    {
        this._selectedMode = mode;
    }

    which could be called from the "CheckedChanged" event of the radio buttons in the UI and would perfectly set the "SelectedMode" on the business object. But I need to stretch the limits and verify whether this can be done purely with data binding.
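    For comparison, here is a minimal sketch of the three-button case - the Modes enum, class name and wiring are assumptions, not the poster's actual code. Each Checked property binds to one boolean; the boolean setters ignore false (the unchecking of a sibling) and route true into SelectedMode, whose setter re-notifies all three flags so the other buttons uncheck themselves through their bindings:

    using System.ComponentModel;

    public enum Modes { Auto, Manual, Import }

    public class ModePresentationModel : INotifyPropertyChanged
    {
        private Modes selectedMode = Modes.Auto;

        public event PropertyChangedEventHandler PropertyChanged;

        public Modes SelectedMode
        {
            get { return this.selectedMode; }
            set
            {
                if (this.selectedMode == value) return;
                this.selectedMode = value;
                // Notify all flags, so the bindings of the other two
                // radio buttons read false and uncheck them.
                this.OnPropertyChanged("SelectedMode");
                this.OnPropertyChanged("IsAutoMode");
                this.OnPropertyChanged("IsManualMode");
                this.OnPropertyChanged("IsImportMode");
            }
        }

        public bool IsAutoMode
        {
            get { return this.SelectedMode == Modes.Auto; }
            set { if (value) this.SelectedMode = Modes.Auto; }
        }

        public bool IsManualMode
        {
            get { return this.SelectedMode == Modes.Manual; }
            set { if (value) this.SelectedMode = Modes.Manual; }
        }

        public bool IsImportMode
        {
            get { return this.SelectedMode == Modes.Import; }
            set { if (value) this.SelectedMode = Modes.Import; }
        }

        private void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = this.PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

    The bindings then use DataSourceUpdateMode.OnPropertyChanged, so a click pushes the value immediately rather than waiting for validation:

    rbtnAutoMode.DataBindings.Add("Checked", model, "IsAutoMode", false, DataSourceUpdateMode.OnPropertyChanged);
    rbtnManualMode.DataBindings.Add("Checked", model, "IsManualMode", false, DataSourceUpdateMode.OnPropertyChanged);
    rbtnImportMode.DataBindings.Add("Checked", model, "IsImportMode", false, DataSourceUpdateMode.OnPropertyChanged);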

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read, there is still some debate on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application business layer.
    I have used MS for many years, but one gripe I have with them is that they change their direction a lot. By a lot I mean new releases of .NET, Silverlight, etc. are more than 30% different from the previous version, so by the time you become productive a new release is on the way. As things stand now, a web developer using .NET would need to know either VB.NET or C#, plus XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases, and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language.
    Are we looking at a programming revolution or a marketing campaign? As far as I know, other languages like COBOL have stayed the same for years; a COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes need to be made if the underlying original foundation had been sound? I worry about following MS's shaky roadmap, with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • Gathering Staff anyone interested?

    - by kasene
    Thread Title - Gathering Staff
    RushSoft Game Design is currently looking for staff of a moderate skill level.
    Team Name - RushSoft Game Design
    Project Name - N/A. We are gathering staff so that we can begin working on a new game.
    Target Aim - Freeware / free version - paid version. With our first project our aim is simply to get our name out there. Generally we will be targeting a freeware distribution platform, or a free and a paid version.
    Compensation - Perhaps in the future, but don't rely on it. If in the future we start developing a game we intend to make any sort of sizable profit from, then yes, there will be compensation; however, currently our low, low funding comes from generous donations. Any money that we make for now will go to the team's funding for things like engine licenses and company registration.
    Technology - C/C++, RSETech. Our primary language will be C/C++, as for most games. We will be using a custom-built library built on Direct3D called RSETech, or RushSoft Engine Technology. Currently it is fully capable of being used for developing a game. The final version is made up almost entirely of C (no C++ or OOP). A C++ version is currently in the works.
    Programming - Microsoft Visual C++ 2008 / 2010. 2D Art - Photoshop CS2, GIMP.
    Talent Needed:
    - 2 Programmers, with an understanding of C/C++ and the following game programming aspects: if/else conditions; functions/methods; arrays; pointers (you don't need to fully understand these, just know when they need to be used); enums; loops (for and while); structs (and how to use . and -> syntax); classes (and how to call methods and access variables from a class); state machines; switches; include guards; and an understanding of how game loops work in general (init, update, render, deinit).
    - 2 Artists. As long as you have the means to draw 2D sprites and can collaborate with a game designer to get a good result.
    - 1 or more Game Designers. You can design levels (for platformers), write game scripts, and come up with good ideas and game mechanics. As long as you can do these things and are able to work well with artists and programmers, you're golden.
    - Business Consultant. Someone who knows the industry and how it works. Will inquire about possible distribution platforms as well as contact other developers, websites, and publishers on RushSoft's behalf.
    Team Structure:
    - Kasene Clark - Co-Founder/Lead Programmer/Game Designer
    - Casey W - Co-Founder/Artist (GC/Animation)/Game Designer
    - Nathan Mayworm - Game Designer
    Website - RushSoft Website
    Contact - Kasene Clark: [email protected] - [email protected]. Phone - 12075181967
    Feedback - Any. Thank You! -Kasene

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

    public void AddItemSecurity(int itemId, int[] userIds)
    public int[] GetValidItemIds(int userId)

    Initially I'm thinking of storage of the form "itemId -> userId, userId, userId" and "userId -> itemId, itemId, itemId". AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update to an existing itemId, I need to remove that itemId for users no longer in the list.
    I'm trying to think about how to store this in an optimal fashion - preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The itemId -> userId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.
    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)
    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:
    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB of RAM for SQL Server
    - Dual-core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users
    If I run one thread, it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade: a third thread brings a lot of the queries up to 2 seconds, a fourth up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even with one thread; my test app accounts for some of it due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well - at least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
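    As a concrete illustration of the per-user bit array idea, here is a minimal C# sketch - the year-bucketing split (4 digits of year, 6 of sequence) is an assumption read off the sample ids, and the memory-mapped persistence described above is left out:

    // One bit per item within a year bucket. With at most a million sequence
    // numbers per year, each per-user bucket is 125,000 bytes (~122 KB).
    public class UserItemBitmap
    {
        private readonly byte[] bits;

        public UserItemBitmap(int maxItemsPerYear)
        {
            this.bits = new byte[(maxItemsPerYear + 7) / 8];
        }

        // Item ids look like 2007123456: four digits of year, six of sequence.
        private static int SequencePart(long itemId)
        {
            return (int)(itemId % 1000000);
        }

        public void Add(long itemId)
        {
            int i = SequencePart(itemId);
            this.bits[i / 8] |= (byte)(1 << (i % 8));
        }

        public void Remove(long itemId)
        {
            int i = SequencePart(itemId);
            this.bits[i / 8] &= (byte)~(1 << (i % 8));
        }

        public bool Contains(long itemId)
        {
            int i = SequencePart(itemId);
            return (this.bits[i / 8] & (1 << (i % 8))) != 0;
        }
    }

    Per user and year, new UserItemBitmap(1000000) gives both constant-time membership tests and a fixed-size buffer that maps cleanly onto a MemoryMappedFile region.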

    Read the article

  • PayPal IPN case is not being called with the PayPal "Pay Now" button

    - by Ipsita Rout
    The PayPal IPN case is not being called with the PayPal "Pay Now" button. In the PayPal button the following is set:

    return: http://localhost/paypal.php?ch=return
    cancel_return: http://localhost/paypal.php?ch=cancel
    notify_url: http://localhost/paypal.php?ch=ipn

    paypal_form.php:

    <form name="paypal" action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post">
        <input type="hidden" name="cmd" value="_xclick">
        <input type="hidden" name="rm" value="2">
        <input type="hidden" name="no_notes" value="1">
        <input type="hidden" name="custom" value="1">
        <input type="hidden" name="business" value="[email protected]">
        <input type="hidden" name="return" id="return" value="http://localhost/paypal.php?ch=return">
        <input type="hidden" name="cancel_return" id="cancel_return" value="http://localhost/paypal.php?ch=cancel">
        <input type="hidden" name="notify_url" id="notify_url" value="http://localhost/paypal.php?ch=ipn">
        <input type="image" src="https://www.sandbox.paypal.com/WEBSCR-640-20110429-1/en_GB/i/btn/btn_paynow_SM.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online.">
        <img alt="" border="0" src="https://www.sandbox.paypal.com/WEBSCR-640-20110429-1/en_GB/i/scr/pixel.gif" width="1" height="1">
    </form>

    paypal.php:

    <?php
    include('db_connect.php');
    $choice = isset($_GET['ch']) ? $_GET['ch'] : '';
    switch ($choice) {
        case 'return':
            print "Thank you for buying this product. Please visit again, and if you have any comments, do suggest them to us.";
            break;
        case 'ipn':
            $sql = "INSERT INTO paypal(add_date) VALUES(now())";
            mysql_query($sql);
            // Dump the POST variables to a file to see what the IPN sent.
            $x = fopen('test1.txt', 'w+');
            $str2 = 'post data:';
            foreach ($_POST as $k => $v) {
                $str2 .= $k . '--' . $v;
            }
            fwrite($x, $str2);
            fclose($x);
            break;
        case 'cancel':
            print "Thank you for visiting this site. Please tell your friends to buy products through PayPal, which is an easy service.";
            break;
    }
    ?>

    Read the article

  • Help me alter this query to get the desired results - New*

    - by sandeepan
    Please dump these data first:

    CREATE TABLE IF NOT EXISTS `all_tag_relations` (
      `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
      `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
      `id_tutor` int(10) DEFAULT NULL,
      `id_wc` int(10) unsigned DEFAULT NULL,
      PRIMARY KEY (`id_tag_rel`),
      KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
      KEY `id_wc` (`id_wc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ;

    INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
    (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 6, 2, NULL), (4, 7, 2, NULL),
    (8, 3, 1, 1), (9, 4, 1, 1), (10, 5, 2, 2), (11, 4, 2, 2),
    (15, 8, 1, 3), (16, 9, 1, 3), (17, 10, 1, 4), (18, 4, 1, 4),
    (19, 1, 2, 5), (20, 4, 2, 5);

    CREATE TABLE IF NOT EXISTS `tags` (
      `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `tag` varchar(255) DEFAULT NULL,
      PRIMARY KEY (`id_tag`),
      UNIQUE KEY `tag` (`tag`),
      KEY `id_tag` (`id_tag`),
      KEY `tag_2` (`tag`),
      KEY `tag_3` (`tag`),
      KEY `tag_4` (`tag`),
      FULLTEXT KEY `tag_5` (`tag`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ;

    INSERT INTO `tags` (`id_tag`, `tag`) VALUES
    (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'),
    (6, 'Bob'), (7, 'Cratchit'), (8, 'more'), (9, 'fresh'), (10, 'second');

    CREATE TABLE IF NOT EXISTS `webclasses` (
      `id_wc` int(10) NOT NULL AUTO_INCREMENT,
      `id_author` int(10) NOT NULL,
      `name` varchar(50) DEFAULT NULL,
      PRIMARY KEY (`id_wc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

    INSERT INTO `webclasses` (`id_wc`, `id_author`, `name`) VALUES
    (1, 1, 'first class'), (2, 2, 'new class'), (3, 1, 'more fresh'),
    (4, 1, 'second class'), (5, 2, 'sandeepan class');

    About the system: the system consists of tutors and classes. The data in the table all_tag_relations stores tag relations for each registered tutor and each class created by a tutor. The tag relations are used for searching classes. The current data dump corresponds to tutor "Sandeepan Nath", who has created the classes named "first class", "more fresh" and "second class", and tutor "Bob Cratchit", who has created the classes "new class" and "Sandeepan class".
    I am trying for a search query that performs AND logic on the search keywords and returns every class for which the search terms are present in the class name or its tutor's name. To make it easy, the following is the list of search terms and the desired result classes (check the id_wc in the results):

    Search term            Result classes
    first class            1
    Sandeepan Nath class   1
    Sandeepan Nath         1, 3
    Bob Cratchit           2
    Sandeepan Nath bob     none
    Sandeepan class        1, 4, 5

    I have so far reached this query:

    -- Two keywords search
    SET @tag1 = 4, @tag2 = 1; -- Setting some user variables to see where the ids go.
    SELECT wc.id_wc,
        sum( DISTINCT ( wtagrels.id_tag = @tag1 ) ) AS key_1_class_matches,
        sum( DISTINCT ( wtagrels.id_tag = @tag2 ) ) AS key_2_class_matches,
        sum( DISTINCT ( ttagrels.id_tag = @tag1 ) ) AS key_1_tutor_matches,
        sum( DISTINCT ( ttagrels.id_tag = @tag2 ) ) AS key_2_tutor_matches,
        sum( DISTINCT ( ttagrels.id_tag = wtagrels.id_tag ) ) AS key_class_tutor_matches
    FROM WebClasses AS wc
    JOIN all_tag_relations AS wtagrels ON wc.id_wc = wtagrels.id_wc
    JOIN all_tag_relations AS ttagrels ON (wc.id_author = ttagrels.id_tutor)
    WHERE (
        wtagrels.id_tag = @tag1
        OR wtagrels.id_tag = @tag2
        OR ttagrels.id_tag = @tag1
        OR ttagrels.id_tag = @tag2
    )
    GROUP BY wtagrels.id_wc
    LIMIT 0, 20

    For a search with 1 or 3 terms, remove/add the variable part in this query.
    Tabulating my observations of the values of key_1_class_matches and key_2_class_matches (say, class keys) and key_1_tutor_matches and key_2_tutor_matches (say, tutor keys) for various cases:

    Search term            Expected result   Observation
    first class            1                 for class 1, all class keys + all tutor keys = 1
    Sandeepan Nath class   1                 for class 1, one class key + all tutor keys = 1
    Sandeepan Nath         1, 3              both tutor keys = 1 for these classes
    Bob Cratchit           2                 both tutor keys = 1
    Sandeepan Nath bob     none              no complete tutor matches for any class

    I found a pattern: in each case, the class(es) which should appear in the result have the highest number of matches (all class keys and tutor keys). E.g. searching "first class", only for class 1 is the total of key matches 4 (1+1+1+1); searching "Sandeepan Nath", classes 1, 3, 4 (all classes by Sandeepan Nath) have all the tutor keys matching. But there is no such pattern in the search for "Sandeepan class" - classes 1, 4, 5 should match. Now, how do I put a condition into the query, based on that pattern, so that only those classes are returned? Do I need to use full-text search here, since it gives a scoring/rank value indicating the strength of the match? Any sample query would help.
    Please note - I have already found a solution for showing classes when any/all of the search terms match the class name: http://stackoverflow.com/questions/3030022/mysql-help-me-alter-this-search-query-to-get-desired-results. But if all the search terms are in the tutor name, it does not work. So I am modifying the query and experimenting.

    Read the article
