Search Results

Search found 6497 results on 260 pages for 'minimum spanning tree'.

Page 67/260 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Fullscreen windowed mode in id games

    - by Oli
    I run a TwinView, dual-monitor system. I like to play games fullscreen on one of the monitors, not spanning both. With Wine, this works by just setting it to desktop mode and setting the resolution to that of one screen. For OpenTTD, I used Compiz's Window Rules plugin. But I have a few native games that this doesn't work for. Today's experiment involved Prey (Doom 3 engine), but I've had similar issues with other id engines. So in short: has anybody found a way of having Prey/OpenArena/Doom 3/etc. run in windowed mode but with fullscreen decorations (that is to say, no borders and above the panel)?
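
    One sketch worth trying (my suggestion, not something from the thread): run the game windowed at the monitor's resolution, then ask the window manager to make it borderless fullscreen. In Python, assuming wmctrl is installed and the window title matches:

        import subprocess

        # Force a window (matched by title) into borderless fullscreen via the
        # window manager, so a windowed game covers exactly one monitor.
        def make_borderless(title: str) -> None:
            subprocess.run(["wmctrl", "-r", title, "-b", "add,fullscreen"],
                           check=True)

        make_borderless("Prey")  # hypothetical window title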

    Read the article

  • Oracle OpenWorld Update -- Scaling Infrastructure to Meet Business Growth: A Coherence Customer Panel

    - by Ruma Sanyal
    Today, the Monday of OpenWorld, is packed with great content and sessions. I have already blogged about the general session by Ajay Patel and the classic Cloud Application Foundation roadmap and strategy session by Mike Lehmann. But we would be remiss if we didn't mention the customer panel for Coherence. Come listen to customers spanning a wide variety of industries, such as consumer goods, railways, and agricultural biotechnology, discuss how Oracle Coherence enables business growth, cost cutting, and improved customer experience. You will learn how Coherence helps scale services cost-effectively, improve performance, and assure service availability in both on-premises and cloud deployments. Each customer will present details of their specific use cases, benefits, and war stories of developing, deploying and managing some of the largest data grid deployments in the world. The session will be moderated by Cameron Purdy, VP of Development, and Mr. Coherence himself. For more information about this and other Coherence sessions, review the Coherence Focus On document. Details: Monday, 10/1, 12:15 p.m. - 1:15 p.m., Moscone South, Room 309

    Read the article

  • Building Tag Cloud Declarative ADF Component

    - by Arunkumar Ramamoorthy
    When building a website, there could be a requirement to add a tag cloud to let the users know the popular tags (or terms) used in the site. In this blog, we will build a simple declarative component to be used as a tag cloud in the page. To start with, we first create the declarative component, which will display the tag cloud. We do that by creating a new custom application from the new gallery. Give a name for the app and the project, and from the new gallery create a new ADF Declarative Component. We need to specify the name for the declarative component, the attributes in it, and so on. For displaying the tags as a cloud, we need to pass the content to this component, so we create an attribute to hold the values for the tags. Let us name it "value" and make it of type java.lang.String. After this, to hold the component, we need to create a tag library; this can be done by clicking on the Add Tag Library button. Clicking OK in all the open dialogs creates a declarative component for us. Now we need to display the tag cloud based on the value passed to the component. To do that, we assume that the value is a tree binding and has two attributes in it, say "Name" and "Weight". To make a tag cloud, we put the "Name"s together in a loop and set each one's font size based on its "Weight". After putting our logic to work, the source is straightforward: attributes added to the declarative component can be retrieved by using #{attrs.<attribute_name>}. Now we need to deploy this project as an ADF Library JAR file, so that it can be distributed to the consuming applications. We select ADF Library JAR as the type and create the profile; we get the JAR file after deployment. To test the functionality, we can create a simple Fusion Web Application. To add our custom component to the consuming application, we can create a file system connection pointing to the location of the JAR file and add it there, or add it through the project properties of the ViewController project. Now our custom component has been added to the consuming application. We can test it by creating a VO in the model project with a query like: select 'Faces' as Name, 25 as Weight from dual union all select 'ADF', 15 from dual union all select 'ADFdi', 30 from dual union all select 'BC4J', 20 from dual union all select 'EJB', 40 from dual union all select 'WS', 35 from dual. Add this VO to the AppModule, so that it is exposed to the data control. Then we create a jspx page and add a tree binding to the VO created. Our Tag Cloud declarative component is now available in the component palette. It can be inserted from the palette into our page, setting its value property to the CollectionModel of the tree binding created. Now that we've created the declarative component and added it to our page successfully, we can run the page to see how it looks. As per the query, the tags are displayed in different font sizes, based on their weight.
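
    The heart of that loop is just a weight-to-font-size mapping; a minimal sketch in Python (the 10-40px range and the linear scaling are my assumptions, not the post's code):

        # Map each tag's weight onto a font-size range; smallest weight gets
        # min_px, largest gets max_px, everything else lands in between.
        def font_sizes(tags, min_px=10, max_px=40):
            weights = [w for _, w in tags]
            lo, hi = min(weights), max(weights)
            span = (hi - lo) or 1
            return {name: min_px + (w - lo) * (max_px - min_px) // span
                    for name, w in tags}

        print(font_sizes([("Faces", 25), ("ADF", 15), ("EJB", 40)]))
        # {'Faces': 22, 'ADF': 10, 'EJB': 40}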

    Read the article

  • OTN, T-Shirts, and Tunes at Mezzanine - Tuesday Oct 2.

    - by Bob Rhubart
    By now you've probably heard about the Oracle OpenWorld Music Festival, which will bring an incredible array of bands, spanning the spectrum of genres, to several venues throughout San Francisco. The festival runs Sunday through Thursday, with a break on Wednesday for the Oracle Appreciation Event on Treasure Island featuring Pearl Jam, Kings of Leon, and X. ***CORRECTION*** What you probably don't know is that OTN is sponsoring the Tuesday night Festival show at Mezzanine (444 Jessie Street at Mint), featuring: GOLDEN STATE, DEATH VALLEY HIGH, and LOW FLYING OWLS. The OTN crew will be on hand, passing out t-shirts and resisting the temptation to misbehave. Mostly.

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways:

        quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

        quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover
        mount: you must specify the filesystem type

        quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt
        mount: you must specify the filesystem type

    file doesn't even give me detailed information on the image I just made; nautilus says it's 160gb:

        quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img
        /media/jump1/1recover/sdb1.img: data

        quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img
        Cannot determine partition type

    I'm not sure what I'm doing wrong, or if I started this process incorrectly from the beginning; I've outlined what I've done so far below. I'm clueless. I'd appreciate it if someone had some input for me.

    What I have done from the beginning: my laptop has two hard drives. One has the dual-boot Win7 / Linux Mint system files; the second contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery; it failed, and wouldn't even load a Live session with the disk installed. So I turned to ddrescue.

        quark@DS9 ~ $ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009fc18

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048   112642047   56320000   7  HPFS/NTFS/exFAT
        /dev/sda2    138033152   312580095   87273472  83  Linux
        /dev/sda3    112644094   138033151   12694529   5  Extended
        /dev/sda5    112644096   132173823    9764864  83  Linux
        /dev/sda6    132175872   138033151    2928640  82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/sdb: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002a8ea

        Device Boot  Start        End      Blocks  Id  System
        /dev/sdb1   *   63   312576704  156288321  83  Linux

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xed6d054b

        Device Boot  Start         End     Blocks  Id  System
        /dev/sdc1       63  1953520064  976760001   7  HPFS/NTFS/exFAT

    sda - 160g internal, holds all system files and all computer functions.
    sdb - 160g internal, BROKEN, contains about 140g of data I'd like to recover.
    sdc - 1T external, contains the recovery image; the only place that has space to do all this.

    From this site, https://apps.education.ucsb.edu/wiki/Ddrescue, I used the following script to create an image of the broken hard drive, changing the destination to the external USB drive:
        #!/bin/sh
        prt=sdb1
        src=/dev/$prt
        dst=/media/jump1/1recover/$prt.img
        log=$dst.log
        sudo time ddrescue --no-split $src $dst $log
        sudo time ddrescue --direct --max-retries=3 $src $dst $log
        sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log

    Everything looked like it came off without a hitch:

        quark@DS9 ~ $ sudo bash recover1
        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued: 0 B, errsize: 0 B, errors: 0
        Current status
        rescued: 160039 MB, errsize: 4096 B, current rate: 35588 B/s
        ipos: 3584 B, errors: 1, average rate: 22859 kB/s
        opos: 3584 B, time from last successful read: 0 s
        Finished
        12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k
        312580958inputs+0outputs (1major+601minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued: 160039 MB, errsize: 4096 B, errors: 1
        Current status
        rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s
        ipos: 1536 B, errors: 1, average rate: 13 B/s
        opos: 1536 B, time from last successful read: 1.3 m
        Finished
        0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        238inputs+0outputs (3major+374minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued: 160039 MB, errsize: 1024 B, errors: 1
        Current status
        rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s
        ipos: 1536 B, errors: 1, average rate: 0 B/s
        opos: 1536 B, time from last successful read: 3.7 m
        Finished
        0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        8inputs+0outputs (0major+376minor)pagefaults 0swaps

    It looks like, from where I'm standing, it worked perfectly. Here's the log:

        # Rescue Logfile. Created by GNU ddrescue version 1.14
        # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log
        # current_pos  current_status
        0x00000600     +
        #      pos        size  status
        0x00000000  0x00000400  +
        0x00000400  0x00000400  -
        0x00000800  0x254314FC00  +

    I'm not sure how to proceed. Does this mean all of my data is lost? I'd appreciate ANY input!
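
    As an aside, the v1.14 logfile format shown above is easy to tally mechanically; a minimal Python sketch (assuming that format, where '-' marks an unreadable block) that sums the unrecovered bytes:

        # Sum unrecovered bytes from a ddrescue logfile of the form above,
        # where data lines are "pos size status" and '-' means unreadable.
        def bad_bytes(path):
            total = 0
            with open(path) as log:
                for line in log:
                    fields = line.split('#')[0].split()
                    if len(fields) == 3 and fields[2] == '-':
                        total += int(fields[1], 16)
            return total

        print(bad_bytes("/media/jump1/1recover/sdb1.img.log"))  # 1024 for this log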

    Read the article

  • Android Dynamic 2D Map

    - by Deltharis
    My problem is, I want to create a 2D tiled map. Yes, I know it's been asked a lot. I've seen answers that propose the use of Tiled; however, it only allows (or so it seems to me) generating static maps that do not change once generated. And I need a large, uniform space of empty tiles, upon which players may place various buildings (some spanning more than one tile while logically being the same building). How do I approach this in Android? Do I make some kind of TableLayout, use an arbitrarily large number of rows and ImageViews (with my emptyTile), then somehow work out event-based changing of image IDs from there? I'd think that only a portion of that map should be visible at a time, but I don't see how scrolling around could be part of that structure.
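
    However the drawing ends up being done (a custom scrolling View drawn on a Canvas is the usual suggestion rather than a giant TableLayout), the model underneath can stay simple. A platform-neutral sketch in Python, with invented names, of a grid where one building owns several tiles:

        # The grid maps each tile coordinate to the building occupying it, so
        # a multi-tile building is one object referenced from several cells.
        class Building:
            def __init__(self, kind):
                self.kind = kind

        class TileMap:
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.tiles = {}                      # (x, y) -> Building

            def place(self, building, x, y, w=1, h=1):
                cells = [(x + i, y + j) for i in range(w) for j in range(h)]
                if any(c in self.tiles for c in cells):
                    return False                     # something already there
                for c in cells:
                    self.tiles[c] = building
                return True

        m = TileMap(100, 100)
        m.place(Building("farm"), 3, 4, w=2, h=2)
        print(m.tiles[(4, 5)].kind)                  # farm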

    Read the article

  • Orca: extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I looked up Orca (version 3.1), whose documentation states:

        Many merge module options can be specified from the command line...

        Extracting Files from a Merge Module

        Orca supports three different methods for extracting files contained in a
        merge module. Orca can extract the individual CAB file, extract the files
        into a module tree and extract the files into a source image once it has
        been merged into a target database...

        Extracting Files

        To extract the individual files from a merge module, use the ... -x ...
        option on the command line, where <path> is the desired path to the new
        directory tree. The specified path is used as the root path for the
        extracted files. All files are extracted from the CAB file embedded in
        the module and placed in the specified path. The directory layout for the
        extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines:

        orca -x theDirectory theModule.msm
        orca theModule.msm -x theDirectory

    ...and others. I get nowhere. The closest I've gotten was this:

        orca -q -x theDirectory -m theModule.msm

    ...but then it complains that I didn't specify a database to merge into. But I'm not trying to merge anything, much less into a database. I just want the files extracted. Can someone explain what I'm doing wrong with the command line options?

    Read the article

  • jstree will not fire onchange event

    - by vasion
    I have been really stuck on this. This is the JS:

        var treeoptions = {"data":{"type":"json","opts":{"url":"\/surveytags\/treejson"}}};
        $('#treecontainer').tree(treeoptions);
        $("#treecontainer").tree({
            callback : {
                ondblclk : function (node, tree) {
                    alert(node.id);
                },
                onmove : function (node, ref, type) {
                    data = new Object();
                    data.node = new Object();
                    data.node.id = node.id;
                    data.ref = new Object();
                    data.ref.id = ref.id;
                    data.type = type;
                    moveitem(data);
                },
                onchange : function () {
                    alert('focused');
                },
                oncreate : function (node) {
                    alert('create');
                    alert(node.data);
                }
            }
        });

    This is the JSON:

        {"attributes":{"id":"1"},"data":{"title":"root"},"children":[{"attributes":{"id":"2"},"data":{"title":"blah"},"children":[{"attributes":{"id":"3"},"data":{"title":"tworows down"}},{"attributes":{"id":"4"},"data":{"title":"tooope"}}]}]}

    It loads, and other events fire, but onchange will not...

    Read the article

  • HQL Query Not Running

    - by Sarang
        select new UpdateCountDataBean(count(elements(am.actionId)) as noOfUpdates, am.pname as name)
        from ActivityMaster am group by am.pname

    UpdateCountDataBean is a class I created, while the second class, ActivityMaster, is a POJO class.

        java.lang.NullPointerException
            at org.hibernate.hql.ast.tree.MethodNode.handleElements(MethodNode.java:158)
            at org.hibernate.hql.ast.tree.MethodNode.resolveCollectionProperty(MethodNode.java:109)
            at org.hibernate.hql.ast.tree.CollectionFunction.resolve(CollectionFunction.java:22)
            at org.hibernate.hql.ast.HqlSqlWalker.processFunction(HqlSqlWalker.java:835)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.collectionFunction(HqlSqlBaseWalker.java:2558)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.aggregateExpr(HqlSqlBaseWalker.java:2907)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.count(HqlSqlBaseWalker.java:2483)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1971)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.aliasedSelectExpr(HqlSqlBaseWalker.java:2057)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.constructor(HqlSqlBaseWalker.java:2226)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1952)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExprList(HqlSqlBaseWalker.java:1825)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectClause(HqlSqlBaseWalker.java:1394)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:553)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:281)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:229)
            at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:228)
            at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:160)
            at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:111)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:77)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:56)
            at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:72)
            at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:133)
            at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:112)
            at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1623)

    Read the article

  • Extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I'm trying msidb.exe and orca.exe. The documentation for Orca states:

        Many merge module options can be specified from the command line...

        Extracting Files from a Merge Module

        Orca supports three different methods for extracting files contained in a
        merge module. Orca can extract the individual CAB file, extract the files
        into a module tree and extract the files into a source image once it has
        been merged into a target database...

        Extracting Files

        To extract the individual files from a merge module, use the ... -x ...
        option on the command line, where <path> is the desired path to the new
        directory tree. The specified path is used as the root path for the
        extracted files. All files are extracted from the CAB file embedded in
        the module and placed in the specified path. The directory layout for the
        extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines, usually starting with this:

        orca -x theDirectory theModule.msm

    I use "theDirectory" as whatever empty folder I want. Like I said, it didn't do anything. Then I tried msidb, where a couple of my attempts look like this:

        msidb -d theModule.msm -w {storage}
        msidb -d theModule.msm -x {stream}

    In both cases, I don't know what to insert for {storage} or {stream} to make it happy; I don't know what those represent. Can someone explain what I'm doing wrong with the command line options? Is there any other tool that can do this?

    Read the article

  • git push error '[remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Yesterday I posted a question regarding how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612 I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, do a 'git commit -a -m "test"', and then do a git push, I get this error on my dest (192.168.1.1):

        git push
        [email protected]'s password:
        Counting objects: 21, done.
        Compressing objects: 100% (11/11), done.
        Writing objects: 100% (11/11), 1010 bytes, done.
        Total 11 (delta 9), reused 0 (delta 0)
        error: refusing to update checked out branch: refs/heads/master
        error: By default, updating the current branch in a non-bare repository
        error: is denied, because it will make the index and work tree inconsistent
        error: with what you pushed, and will require 'git reset --hard' to match
        error: the work tree to HEAD.
        error:
        error: You can set 'receive.denyCurrentBranch' configuration variable to
        error: 'ignore' or 'warn' in the remote repository to allow pushing into
        error: its current branch; however, this is not recommended unless you
        error: arranged to update its work tree to match what you pushed in some
        error: other way.
        error:
        error: To squelch this message and still keep the default behaviour, set
        error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To git+ssh://[email protected]/media/LINUXDATA/working
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; will that cause this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I'd appreciate it if someone can help me with this. Thank you.

    Read the article

  • Progress Bar design patterns?

    - by shoosh
    The application I'm writing performs a lengthy algorithm which usually takes a few minutes to finish. During this time I'd like to show the user a progress bar which indicates how much of the algorithm is done, as precisely as possible. The algorithm is divided into several steps, each with its own typical timing. For instance:

        initialization (500 milliseconds)
        reading inputs (5 sec)
        step 1 (30 sec)
        step 2 (3 minutes)
        writing outputs (7 sec)
        shutting down (10 milliseconds)

    Each step can report its progress quite easily by setting the range it's working on, say [0 to 150], and then reporting the value it completed in its main loop. What I currently have set up is a scheme of nested progress monitors which form a sort of implicit tree of progress reporting. All progress monitors inherit from an interface IProgressMonitor:

        class IProgressMonitor
        {
        public:
            virtual void setRange(int from, int to) = 0;
            virtual void setValue(int v) = 0;
        };

    The root of the tree is the progress monitor which is connected to the actual GUI:

        class GUIBarProgressMonitor : public IProgressMonitor
        {
            GUIBarProgressMonitor(ProgressBarWidget *);
        };

    Any other node in the tree is a monitor which takes control of a piece of its parent's progress:

        class SubProgressMonitor : public IProgressMonitor
        {
            SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength)
            ...
        };

    A SubProgressMonitor takes control of the range [parentFrom, parentFrom+parentLength] of its parent. With this scheme I am able to statically divide the top-level progress according to the expected relative portion of each step in the global timing. Each step can then be further subdivided into pieces, etc. The main disadvantage of this is that the division is static, and it gets painful to make changes according to variables which are discovered at run time. So the question: are there any known design patterns for progress monitoring which solve this issue?
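
    One direction (my sketch, not a named pattern from the literature): keep the same composition, but make the weights ordinary values chosen, and re-chosen, at run time. In Python for brevity:

        # Composable progress: a child reports in [0, 1] and the parent maps it
        # into its own slice. Slices are plain numbers picked at run time.
        class Progress:
            def __init__(self, report):
                self.report = report            # callable taking a float in [0, 1]

            def sub(self, start, length):
                # a child's f maps to start + f * length in the parent
                return Progress(lambda f: self.report(start + f * length))

        root = Progress(lambda f: print(f"{f:.0%}"))
        read = root.sub(0.00, 0.10)             # weights chosen/adjusted at run time
        work = root.sub(0.10, 0.85)
        write = root.sub(0.95, 0.05)
        for i in range(5):
            work.report((i + 1) / 5)            # prints 27% 44% 61% 78% 95%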

    Read the article

  • WPF UI Automation - AutomationElement.FindFirst fails when there are lots of elements

    - by Orion Edwards
    We've got some automated UI tests for our WPF app (.NET 4); these tests use the UI Automation APIs. We call AutomationElement.FindFirst to find a target element, and then interact with it. Example (pseudocode):

        var nameEquals = new PropertyCondition(AutomationElement.NameProperty, "OurAppWindow");
        var appWindow = DesktopWindow.FindFirst(TreeScope.Children, nameEquals); // this succeeds

        var idEquals = new PropertyCondition(AutomationElement.AutomationIdProperty, "ControlId");
        var someItem = appWindow.FindFirst(TreeScope.Descendants, idEquals); // this succeeds sometimes, and fails sometimes!

    The problem is, the appWindow.FindFirst will sometimes fail and return null, even when the element is present. I've written a helper function which walks the UI Automation tree manually and prints it out, and the element with the correct ID is present in all cases. It seems to be related to how many other items are also being displayed in the window. If there are no other items then it always succeeds, but when there are many other complex UI elements being displayed alongside it, the find fails. I can't find any documented element limit for any of the automation APIs; is there some way around this? I'm thinking I might have to write my own implementation of FindFirst which does the tree walk manually itself... As far as I can tell this should work, because my tree-printer utility function does exactly that, and it's OK, but it seems like this would be unnecessary and slow :-( Any help would be greatly appreciated.
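
    The manual fallback described above is just a depth-first search; a language-neutral sketch (Python, with stand-in callbacks where the real code would use the UI Automation TreeWalker):

        # Generic FindFirst: walk a tree depth-first and return the first node
        # accepted by `matches`. `children` stands in for whatever enumerates
        # child elements in the real API.
        def find_first(root, matches, children):
            stack = [root]
            while stack:
                node = stack.pop()
                if matches(node):
                    return node
                stack.extend(reversed(children(node)))  # keep left-to-right order
            return None

        tree = ("root", [("a", []), ("b", [("target", [])])])
        hit = find_first(tree, lambda n: n[0] == "target", lambda n: n[1])
        print(hit[0])  # target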

    Read the article

  • PHP function to generate UL LI

    - by apis17
    Hi, I'm referring to this address for the function olLiTree: http://stackoverflow.com/questions/753853/php-function-that-creates-a-nested-ul-li I have this array:

        $tree = array("A" => array("B" => array("C" => "C", "D" => "D"),
                                   "E" => array("F" => "F", "G" => "G")));

    but I am not able to use this function:

        function olLiTree($tree)
        {
            echo '<ul>';
            foreach ($tree as $item) {
                if (is_array($item)) {
                    olLiTree($item);
                } else {
                    echo '<li>', $item, '</li>';
                }
            }
            echo '</ul>';
        }

    to generate:

        <ul>
            <li>A</li>
            <li>B
                <ul>
                    <li>C</li>
                    <li>D</li>
                </ul>
            </li>
            <li>E
                <ul>
                    <li>F</li>
                </ul>
            </li>
            <ul>
                <li>G</li>
            </ul>
        </ul>

    Can anybody help me to fix this? Thanks.
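
    For comparison, here is the recursive shape such a generator needs, sketched in Python rather than PHP (my illustration, not the linked answer): emit the key before recursing into its children, which is the step the quoted function skips for nested arrays:

        # Emit each key as a list item; if it has children, nest their list
        # inside that item before closing it.
        def ol_li_tree(tree):
            out = ["<ul>"]
            for key, item in tree.items():
                if isinstance(item, dict):
                    out.append(f"<li>{key}{ol_li_tree(item)}</li>")
                else:
                    out.append(f"<li>{item}</li>")
            out.append("</ul>")
            return "".join(out)

        print(ol_li_tree({"A": {"B": {"C": "C", "D": "D"},
                                "E": {"F": "F", "G": "G"}}}))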

    Read the article

  • Parsing some particular statements with antlr3 in C target

    - by JCD
    Hello all! I have some questions about antlr3 with a tree grammar in the C target. I have almost finished my interpreter (functions, variables, boolean and math expressions all OK) and I have kept the most difficult statements for the end (like if, switch, etc.).

    1) I would like to interpret a simple loop statement:

        repeat: ^(REPEAT DIGIT stmt);

    I've seen many examples, but nothing about the tree walker (only a topic here with the macros MARK() / REWIND(m) + @init / @after, but not working; I get antlr errors: "unexpected node at offset 0"). How can I interpret this statement in C?

    2) Same question with a simple if statement:

        if: ^(IF condition stmt elseifstmt* elsestmt?);

    The problem is to skip the statement if the condition is false and test the other elseif/else statements.

    3) I have some statements which can stop the script (like "break" or "exit"). How can I interrupt the tree walker and skip the following tokens?

    4) When a lexer or parser error is detected, antlr returns an error, but I would like to produce my own homemade error messages. How can I get the line number where the parser crashed?

    Ask me if you want more details. Thank you very much (and I apologize for my poor English).
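
    Setting the ANTLR C macros aside, the control flow being asked about is small enough to sketch; below is an illustrative tree walker in Python (invented node shapes, not ANTLR's API) showing repeat as a re-walked subtree and 'exit' as an unwinding exception:

        # Each node is a tuple whose first element names the rule, loosely
        # mirroring ^(REPEAT DIGIT stmt) and ^(IF condition stmt elsestmt).
        class ExitScript(Exception):
            """Raised to unwind the walk when the script says 'exit'."""

        def walk(node):
            kind = node[0]
            if kind == "REPEAT":                 # ^(REPEAT n stmt): re-walk the body n times
                for _ in range(node[1]):
                    walk(node[2])
            elif kind == "IF":                   # walk only the branch the condition selects
                walk(node[2] if walk(node[1]) else node[3])
            elif kind == "CONST":
                return node[1]
            elif kind == "PRINT":
                print(node[1])
            elif kind == "EXIT":
                raise ExitScript()

        try:
            walk(("REPEAT", 2, ("IF", ("CONST", True), ("PRINT", "hi"), ("EXIT",))))
        except ExitScript:
            pass                                 # the walker stops cleanly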

    Read the article

  • Spy++ for PowerBuilder applications

    - by Frerich Raabe
    Hi, I'm trying to write a tool which lets me inspect the state of a PowerBuilder-based application. What I'm thinking of is something like Spy++ (or, even nicer, 'Snoop' as it exists for .NET applications) which lets me inspect the object tree (and properties of objects) of some PowerBuilder-based GUI. I have already done the same for ordinary (MFC-based) applications as well as .NET applications, but unfortunately I never developed an application in PowerBuilder myself, so I'm generally thinking about two problems at this point:

    1. Is there some API (preferably in Java or C/C++) available which lets one traverse the tree of visual objects of a PowerBuilder application? I read up a bit on the PowerBuilder Native Interface system, but it seems that this is meant for writing PowerBuilder extensions in C/C++ which can then be called from the PowerBuilder script language, right?

    2. If there is some API available, maybe PowerBuilder applications even expose some sort of IPC-enabled API which lets me inspect the state of a PowerBuilder object hierarchy without being within the process of the PowerBuilder application? Maybe there's an automation interface available, or something COM-based, or maybe something else?

    Right now, my impression is that I probably need to inject a DLL into the process of the PowerBuilder application and then gain access to the running PowerBuilder VM so that I can query it for the object tree. Some sort of IPC mechanism would then let me transport this information out of the PowerBuilder application's process. Does anybody have experience with this, or can anyone shed some light on whether anybody has tried to do this already?

    Best regards, Frerich

    Read the article

  • Automatically add links to class source files under a specified directory of another project in Visual Studio

    - by Binary255
    I want to share some class source files between two projects in Visual Studio 2008. I can't create a project for the common parts and reference it (see my comment if you are curious as to why). I've managed to share some source files, but it could be a lot neater. I've created a test solution called Commonality. The Solution Explorer of the Commonality solution, which contains projects One and Two:

    What I like: All class files under the Common folder of project One are automatically added to project Two by linking. It's mostly the same as if I had chosen Add / Existing Item... : Add As Link on each new class source file. And it's clear that these files have been linked in: the shortcut arrow symbol marks each file icon.

    What I do not like: The file and folder tree structure under Common of project One isn't included; it's all flat. The linked source files are shown under the project root of project Two. It would look much less cluttered if they were located under Common, as in project One.

    The file tree structure of the Commonality solution, which contains projects One and Two:

        $ tree /F /A
        Folder PATH listing for volume Cystem
        Volume serial number is 0713370 1337:F6A4
        C:.
        |   Commonality.sln
        |
        +---One
        |   |   One.cs
        |   |   One.csproj
        |   |
        |   +---bin
        |   |   \---Debug
        |   |           One.vshost.exe
        |   |           One.vshost.exe.manifest
        |   |
        |   +---Common
        |   |   |   Common.cs
        |   |   |   CommonTwo.cs
        |   |   |
        |   |   \---SubCommon
        |   |           CommonThree.cs
        |   |
        |   +---obj
        |   |   \---Debug
        |   |       +---Refactor
        |   |       \---TempPE
        |   \---Properties
        |           AssemblyInfo.cs
        |
        \---Two
            |   Two.cs
            |   Two.csproj
            |   Two.csproj.user
            |   Two.csproj~
            |
            +---bin
            |   \---Debug
            +---obj
            |   \---Debug
            |       +---Refactor
            |       \---TempPE
            \---Properties
                    AssemblyInfo.cs

    And the relevant part of project Two's project file, Two.csproj:

        <ItemGroup>
            <Compile Include="..\One\Common\**\*.cs">
            </Compile>
            <Compile Include="Two.cs" />
            <Compile Include="Properties\AssemblyInfo.cs" />
        </ItemGroup>

    How do I address what I do not like, while keeping what I like?

    Read the article

  • Avoid the collapsing effect on TreeView after updating data

    - by Manolete
    I have a TreeView used to display events. It works great; however, every time new events come in and populate the tree, it collapses back to its original state. That is very annoying when the refresh time is less than 1 second, as it does not allow the user to interact with the items of the tree. Is there any way to avoid this behaviour?

        <TreeView Margin="1" BorderThickness="0" Name="eventsTree"
                  ItemsSource="{Binding EventAlertContainers}" Background="#00000000"
                  ScrollViewer.VerticalScrollBarVisibility="Auto" FontSize="14"
                  VirtualizingStackPanel.IsVirtualizing="True">
            <TreeView.Resources>
                <HierarchicalDataTemplate DataType="{x:Type C:EventAlertContainer}"
                                          ItemsSource="{Binding EventAlerts}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="3,0" Source="Resources\Process_info_32.png" />
                        <TextBlock FontWeight="Bold" FontSize="16" Text="{Binding Description}" />
                    </StackPanel>
                </HierarchicalDataTemplate>
                <HierarchicalDataTemplate DataType="{x:Type C:EventAlert}"
                                          ItemsSource="{Binding Events}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="0,0" Source="Resources\clock2_32.jpg" />
                        <TextBlock FontWeight="DemiBold" FontSize="14" Text="{Binding Name}" />
                    </StackPanel>
                </HierarchicalDataTemplate>
                <HierarchicalDataTemplate DataType="{x:Type C:Event}">
                    <StackPanel Orientation="Horizontal">
                        <Image Width="20" Height="20" Margin="0,0" Source="Resources\Task_32.png" />
                        <StackPanel Orientation="Vertical">
                            <TextBlock FontSize="12" Text="{Binding Name}" />
                        </StackPanel>
                    </StackPanel>
                </HierarchicalDataTemplate>
            </TreeView.Resources>
        </TreeView>

    Read the article

  • Returning recursive ternary freaks out

    - by David Titarenco
    Hi, assume the following function:

        int binaryTree::findHeight(node *n) {
            if (n == NULL) {
                return 0;
            } else {
                return 1 + max(findHeight(n->left), findHeight(n->right));
            }
        }

    Pretty standard recursive tree-height function for a given binary search tree binaryTree. Now, I was helping a friend (he's taking an algorithms course), and I ran into a weird issue with this function that I couldn't 100% explain to him. With max defined as the macro max(a,b) ((a)>(b)?(a):(b)) (which happens to be the max definition in windef.h), the recursive function freaks out (it runs something like n^n times, where n is the tree height). This obviously makes checking the height of a tree with 3000 elements take a very, very long time. However, if max is defined via templating, as std does it, everything is okay. So using std::max fixed his problem. I just want to know why. Also, why does the countLeaves function work fine, using the same programmatic recursion?

        int binaryTree::countLeaves(node *n) {
            if (n == NULL) {
                return 0;
            } else if (n->left == NULL && n->right == NULL) {
                return 1;
            } else {
                return countLeaves(n->left) + countLeaves(n->right);
            }
        }

    Is it because, in returning the ternary, the values a => countLeaves(n->left) and b => countLeaves(n->right) were recursively double-called simply because they were the resultants? Thank you!
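
    Not from the thread, but the 'why' can be made visible by counting calls: the macro's ternary evaluates its winning argument a second time, so each call spawns three recursive evaluations instead of two. A Python mock-up of the effect (invented names):

        calls = 0

        def height_macro(n):
            global calls
            calls += 1
            if n is None:
                return 0
            left, right = n  # a node is just a (left, right) pair here
            # mimics ((a)>(b)?(a):(b)): both sides are computed for the
            # comparison, then the winning side is computed again
            return 1 + (height_macro(left)
                        if height_macro(left) > height_macro(right)
                        else height_macro(right))

        def make(depth):
            return None if depth == 0 else (make(depth - 1), make(depth - 1))

        height_macro(make(10))
        print(calls)  # 88573: grows like 3^depth, versus ~2^depth nodes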

    Read the article

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files, and it works, more or less. But it reorders the attributes of various tags, and I'd like it not to do that. Does anyone know a switch I can throw to make it keep them in the specified order?

    Context: I'm working with and on a particle physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML, and there are no facilities for setting or varying them based on environment variables; in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data fields are static, the XML isn't, so I've written a script for fixing the paths; with the attribute rearrangement, though, diffs between the local and master versions are harder to read than necessary. This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this:

        tree = elementtree.ElementTree.parse(inputfile)
        for e in tree.getiterator():
            e.text = filter(e.text)
        tree.write(outputfile)

    Reasonable or dumb?

    Related links:
        How can I get the order of an element attribute list using Python xml.sax?
        Preserve order of attributes when modifying with minidom
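
    For what it's worth, the ElementTree of that era has no such switch: attributes live in a plain dict and are written out in whatever order it yields. If the goal is only clean diffs, one workaround (my sketch, with illustrative names) is to skip re-serializing entirely and patch the text in place:

        # Rewrite hardcoded path prefixes without reparsing, so the markup
        # (and therefore the attribute order) passes through byte-for-byte.
        def fix_paths(xml_text, old_prefix, new_prefix):
            return xml_text.replace(old_prefix, new_prefix)

        with open("config.xml") as f:               # illustrative file names
            fixed = fix_paths(f.read(), "/old/static/data", "/local/static/data")
        with open("config.out.xml", "w") as f:
            f.write(fixed)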

    Read the article

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes, given a simple node.id, node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment
            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))
            node.right = increment
          }
          walk(node)
        }

        walktree(someRootNode)

    Edit: The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time; I am pulling a flat list into memory, and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) => walk(curnode, counter) + 1 }
            node.right
          }
          walk(node, 1)
        }

    Read the article

  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms) SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms) SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms) SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms) SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking about now: Can Rails 3 eager load the whole Post tree + associated Slugs in one DB call? Can I do that easily with named scopes or custom SQL? What is best practice in this situation? I'm not really thinking about memcached here, as that can be applied to much more than just this.

    Read the article

  • Brackets matching using BIT

    - by amit.codename13
    Edit: I was trying to solve a spoj problem. Here is the link to the problem: http://spoj.pl/problems/BRCKTS I can think of two possible data structures for solving it: a segment tree or a BIT. I have already implemented the solution using a segment tree. I have read about BITs, but I can't figure out how to do one particular thing with them (mentioned below). I am trying to check whether brackets are balanced in a given string containing only ('s or )'s, using a BIT (Binary Indexed Tree). The procedure I am following is this: I take an array of size equal to the number of characters in the string, and assign -1 for ) and 1 for ( to the corresponding array elements. Brackets are balanced in the string only if the following two conditions are true:

        1. The cumulative sum of the whole array is zero.
        2. The minimum cumulative sum is non-negative, i.e. the minimum of the cumulative sums over all prefixes of the array is non-negative.

    Checking condition 1 using a BIT is trivial. I am facing problems in checking condition 2.
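
    For concreteness, a sketch of the setup in Python (mine, not the poster's code): a Fenwick tree holding +1 for '(' and -1 for ')'. Condition 1 is a single prefix sum; the brute-force loop over all prefixes at the end is exactly what a plain BIT cannot answer in O(log n), which is the crux of the question:

        # Fenwick tree / BIT: point updates and prefix sums in O(log n).
        class BIT:
            def __init__(self, n):
                self.n, self.t = n, [0] * (n + 1)

            def add(self, i, delta):            # 1-based point update
                while i <= self.n:
                    self.t[i] += delta
                    i += i & -i

            def prefix(self, i):                # sum of elements 1..i
                s = 0
                while i > 0:
                    s += self.t[i]
                    i -= i & -i
                return s

        s = "(()())"
        bit = BIT(len(s))
        for i, ch in enumerate(s, 1):
            bit.add(i, 1 if ch == "(" else -1)
        balanced = (bit.prefix(len(s)) == 0                            # condition 1
                    and all(bit.prefix(i) >= 0                          # condition 2,
                            for i in range(1, len(s) + 1)))             # brute force
        print(balanced)  # True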

    Read the article

  • Enabling a new admin action (button in sales_order/view) in the ACL

    - by latvian
    Hi, we created a new action, similar to 'hold', 'ship' and others, in the 'sales_order/view' admin section, which can be triggered by clicking a button. Afterwards, we added our new action to the ACL with the following code in config.xml:

        <acl>
            <resources>
                <admin>
                    <children>
                        <sales>
                            <children>
                                <order>
                                    <children>
                                        <actions translate="title">
                                            <title>Actions</title>
                                            <children>
                                                <shipNew translate="title"><title>Ship Ups</title></shipNew>
                                            </children>
                                        </actions>
                                    </children>
                                    <sort_order>10</sort_order>
                                </order>
                            </children>
                        </sales>
                    </children>
                </admin>
            </resources>
        </acl>

    The ACL functionality works. However, in the 'Resources Tree' (System/Permissions/Roles/Role Resources) our new action never shows up as selected (checked), even though it is allowed for the particular role. I can see from the table 'admin_rule', with the resource id for our new action, that it is allowed, so it should be selected, but it is not. While trying to solve this issue I looked into the template (permissions/rolesedit.phtml) and found that the resources tree is drawn with Javascript... and that's where I got stuck, due to my limited knowledge of Javascript. Why does the resources tree not display our new ACL entry correctly, i.e. why is the checkbox never checked? Thank you for helping. margots

    Read the article
