Search Results

Search found 8605 results on 345 pages for 'general dynamics'.


  • Oracle Enterprise Manager 12c Anniversary at Open World General Session and Twitter Chat using #em12c on October 2nd

    - by Anand Akela
    As most of you will remember, Oracle Enterprise Manager 12c was announced last year at Open World, and we are celebrating its first anniversary at Open World next week. During the last year, Oracle customers have seen the benefits of federated self-service access to complete application stacks, elastic scalability, automated metering, and chargeback capabilities of Oracle Enterprise Manager 12c. In this session you will learn how customers are leveraging Oracle Enterprise Manager 12c to build and operate their enterprise cloud. You will also hear about Oracle's IT management strategy and some new capabilities inside the Oracle Enterprise Manager product family. In this anniversary general session of Oracle Enterprise Manager 12c, you will also watch an interactive role play (similar to what some of you may have seen at the "Zero to Cloud" sessions at the Oracle Cloud Builder Summit) depicting a fictional company in the throes of deploying a private cloud. Watch as the CIO and his key cloud architects battle misconceptions about enterprise cloud computing, and watch how Oracle Enterprise Manager helps them address the key challenges of planning, deploying and managing an enterprise private cloud. The session will be led by Sushil Kumar, Vice President, Product Strategy and Business Development, Oracle Enterprise Manager. Jeff Budge, Director, Global Oracle Technology Practice, CSC Consulting, Inc., will join Sushil for the general session as well. Following the general session, Sushil Kumar (Twitter user name @sxkumar) will join us for a Twitter chat on Tuesday from 1:00 PM to 2:00 PM. Sushil will answer any follow-up questions from the general session or any question related to Oracle Enterprise Manager and Oracle Private Cloud. You can participate in the chat using the hashtag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (requires Twitter credentials to participate). You can also pre-submit your questions for Sushil using any of the social media channels mentioned below. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them back, yet they stay inside the complete entity model so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. The setting is available for all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model. This situation is common for entity models which are dense, i.e. many relationships exist between the sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor for the generated code. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the schema the reverse engineered element is in as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which guides us through the selection of the elements whose relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. In the catalog explorer we select the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
    As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result: (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base.

    Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

    //...
    using System.Xml.Serialization;
    using AdventureWorks.Person;
    using AdventureWorks.Person.HelperClasses;
    using AdventureWorks.Person.FactoryClasses;
    using AdventureWorks.Person.RelationClasses;
    using SD.LLBLGen.Pro.ORMSupportClasses;

    namespace AdventureWorks.Person.EntityClasses
    {
      //...
      /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
      [Serializable]
      public partial class AddressEntity : CommonEntityBase
      //...
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data.

    Conclusion
    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though; this description turns out to be incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.
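    For anyone who wants to reproduce this kind of placement experiment, here is a minimal sketch in C using POSIX threads. It is only an illustration based on the description above: the thread count defaults to 20 to mirror the test, the threads simply burn CPU, and you observe where the kernel put them from the outside (for example with prstat -mL or mpstat on Solaris). The compile line is an assumption; adjust it for your compiler.

    /* spin.c -- start N compute-bound threads and keep them running so that
     * the kernel's initial placement can be inspected externally.
     * Build (assumption, adjust for your toolchain): cc -o spin spin.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *spin(void *arg)
    {
        volatile unsigned long counter = 0;   /* per-thread counter; volatile keeps the loop alive */
        (void)arg;
        for (;;)
            counter++;                        /* pure compute-bound work, no blocking, no sharing */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int i;
        int n = (argc > 1) ? atoi(argv[1]) : 20;   /* default mirrors the 20-thread experiment */
        pthread_t *tid = malloc(sizeof(pthread_t) * (size_t)n);

        if (tid == NULL)
            return 1;
        for (i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, spin, NULL);
        printf("pid %ld: %d spinning threads started; inspect placement now\n",
               (long)getpid(), n);
        pause();                              /* hold the process until interrupted */
        return 0;
    }

    Running several instances of this at once is enough to see the per-process packing behavior described above.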
    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This routine reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however.
    That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD) which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
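    As a footnote to the CPUID layout mentioned above, the x4800 bit fields ([6:6] hyperthread, [5:3] socket, [2:0] core) are easy to decode programmatically. The helper below is purely illustrative and hard-codes exactly those bit positions; on any other platform the layout would have to be re-verified first.

    /* Decode an x4800/Solaris logical CPUID into socket, core and strand,
     * using the layout described in the text: bit 6 = hyperthread,
     * bits 5:3 = socket (package), bits 2:0 = core within the socket.
     */
    #include <stdio.h>

    struct cpu_loc {
        unsigned socket;   /* package, bits [5:3] */
        unsigned core;     /* core within the package, bits [2:0] */
        unsigned strand;   /* hyperthread within the core, bit [6] */
    };

    static struct cpu_loc decode_x4800_cpuid(unsigned cpuid)
    {
        struct cpu_loc loc;
        loc.strand = (cpuid >> 6) & 0x1;
        loc.socket = (cpuid >> 3) & 0x7;
        loc.core   =  cpuid       & 0x7;
        return loc;
    }

    int main(void)
    {
        unsigned id;
        for (id = 0; id < 128; id++) {   /* 8 sockets x 8 cores x 2 strands */
            struct cpu_loc loc = decode_x4800_cpuid(id);
            printf("cpuid %3u -> socket %u core %u strand %u\n",
                   id, loc.socket, loc.core, loc.strand);
        }
        return 0;
    }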

    Read the article

  • Visiting the Fire Station in Coromandel

    Hm, I just tried to remember how we actually came up with this cool idea... but it's already too blurred and it doesn't really matter after all. Anyway, if I remember correctly (IIRC), it happened during one of the Linux meetups at Mugg & Bean, Bagatelle, where Ajay and I brought our children along and we had a brief conversation about how cool it would be to check out one of the fire stations here in Mauritius. We both thought that it would be a great experience and adventure for the little ones.

    An idea takes shape
    And there we go, down the usual routine: having an idea, checking out the options and discussing who's doing what. Except this time, it was all up to Ajay, and he did a fantastic job. End of August, he told me that he got in touch with one of his friends who actually works as a fire fighter at the station in Coromandel and that there could be an option to come and visit them (soon). A couple of days later - Confirmed! Be there, and in time... What time? Anyway, doesn't really matter... Everything was settled and arranged. I asked the kids on Friday afternoon if they might be interested in seeing the fire engines and what a fire fighter does. Of course, they were all in! Getting up early on Sunday morning isn't really a regular exercise for all of us, but everything went smoothly and after a short breakfast it was time to leave. Where are we going? Are we there yet? Now we are in Bambous. Why do you go this way? The kids were so much into it. Absolutely amazing to see their excitement. Are we there yet? Well, we went through the sugar cane fields towards Chebel and then down into the industrial zone at Coromandel. Honestly, I had only a vague clue where the fire station is located, but with Google Maps within reach that shouldn't be a problem in case we got lost. But my worries were washed away when our children guided us... "There! Over there are the fire engines! We have to turn left, dad." - No comment, the kids were right! As we were there a little bit too early, we parked the car and the kids started to explore the area and outskirts of the fire station. Some minutes later, as if we had placed an order, a unit of two vehicles had to go out on an alarm and the kids could witness them leaving from as close as possible. Sirens on, and wow!!!

    Ladder truck L32 - MAN truck with Rosenbauer built-up and equipment by Metz

    Taking the tour
    Ajay arrived shortly after that and finally guided us inside the station to meet his pal. The three guys were absolutely well-prepared and showed us around in the hall, explaining that there were two units out at the moment. But the ladder truck (with a maximum extension height of 32 m) was still around, and we all got a great insight into the technology and equipment on the vehicle. It was amazing to see all three kids listening to Mambo as he gave some figures about the truck and explained how the fire fighters actually operate it.

    The children and 'our' fire fighters of the day had great fun with the various fire engines

    Absolutely fantastic that the children were allowed to experience this - we had so much fun! Ajay's son brought two of his toy fire engines along, shared them with ours, and they all played very well together. As a parent it was really amazing to see them at such ease.

    Enough theory
    Shortly afterwards the ladder truck was moved outside, got stabilised and was ready to go for some 'real-life' exercising. With the additional equipment of safety helmets, security belts and so on, we all got a first-hand impression of what it could be like to be a fire fighter.
    Actually, I was totally amazed by the curiosity and excitement of my BWE. She was really into it and asked lots of interesting questions - in general but also technical ones. And while our fire fighters were busy with Ajay and family, I gave her some more details and explanations about the truck, the expandable ladder, the safety cage at the top and the other equipment available. Safety first! No exceptions, and always be prepared for the worst case... Also, the equipment has to be checked prior to the exercise - this is your life saver... Hooked up and ready to go... ...of course not too high. This is just a demonstration - and 32 meters above ground isn't for everyone. Well, after that it was me who got the asking looks, and I finally revealed to the local fire fighters that I had been in the auxiliary fire brigade, more precisely in the hazard department, for more than 10 years. So not a professional fire fighter, but at least a passionate and educated one like them.

    Inside the station
    Our fire fighters really took their time to explain their daily job to the kids, gave them access to the operator's seat on the ladder truck and showed how the truck cabin is actually equipped with the different radios and so on. It was really a great time. Later on we had a brief tour through the building itself, and again all of our questions were answered. We had great fun and started to joke about bits and pieces. For me it was also very interesting to see the comparison between the fire station here in Mauritius and the ones I have been to back in Germany.

    Amazing to see them completely captivated in the play - the children had lots of fun!

    We also learned that there are currently ten fire stations all over the island, plus two additional but private ones at the airport and at the harbour. The newest one is actually down in Black River on the west coast, because the drive from Quatre Bornes takes too long to have any chance of an effective response at all. IMHO, a very good decision, as time is the most important factor in getting fire incidents under control. All in all it was a great experience for all of us, especially for the children, to see and understand that their toy trucks are only copies of the real thing and that the job of a (professional) fire fighter is very important in our society. Don't forget that those guys run into the danger zone while you're trying to get away from it as much as possible.

    Another unit just came back from a grass fire - and shortly after they went out again. No time to rest, too much to do! Mauritian Fire Fighters now and (maybe) in the future...

    Thank you! It was an honour to be around! Thank you to Ajay for organising and arranging this Sunday morning event, and of course a Big Thank You to the three guys who took some time off to have us at the Fire Station in Coromandel and guide us through their daily job! And remember to call 115 in case of emergencies!

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss from locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can then be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider which caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables have very little impact on performance. The variables are frequently accessed, so the line is likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.
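    To make the communication model concrete before looking at the measurements, here is a minimal sketch of the delegation pattern in portable C11 with atomics. It deliberately ignores NUMA placement (that is exactly the variable being studied) and only shows the two mailboxes, the client's busy-wait and the server's polling loop; the iteration count and the assumed 64-byte cache line size are arbitrary choices for the illustration.

    /* One client thread and one server thread communicating through a request
     * mailbox and a response mailbox, as described above.  The client writes a
     * request and busy-waits for the reply; the server polls for a new request
     * and writes request+1 back.  Build: cc -std=c11 pingpong.c -lpthread
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define ITERS 1000000

    /* Each mailbox gets its own (assumed 64-byte) cache line to avoid false sharing. */
    struct mailbox { _Atomic long val; char pad[64 - sizeof(_Atomic long)]; };

    static struct mailbox request;    /* written by the client, read by the server */
    static struct mailbox response;   /* written by the server, read by the client */

    static void *server(void *arg)
    {
        long last = 0;
        (void)arg;
        for (;;) {
            long req = atomic_load_explicit(&request.val, memory_order_acquire);
            if (req < 0)
                break;                                   /* negative request = shutdown */
            if (req != last) {
                last = req;
                atomic_store_explicit(&response.val, req + 1, memory_order_release);
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t srv;
        long i;

        pthread_create(&srv, NULL, server, NULL);
        for (i = 1; i <= ITERS; i++) {
            atomic_store_explicit(&request.val, i, memory_order_release);
            while (atomic_load_explicit(&response.val, memory_order_acquire) != i + 1)
                ;                                        /* busy-wait on the reply */
        }
        atomic_store_explicit(&request.val, -1, memory_order_release);
        pthread_join(srv, NULL);
        printf("%d round trips completed\n", ITERS);
        return 0;
    }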
    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, given that the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies.
    There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings.) And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice being associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
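    The experiments above were run on Solaris, but just to illustrate what "home the variable with its writer" can look like in code, here is a sketch using Linux and libnuma (an assumption for illustration only, not the harness used in this post). Each mailbox is allocated on the node of the thread that writes it, and each thread asks to run on its own node; node numbers 0 and 1 are placeholders.

    /* Favorable placement from the discussion above: the request mailbox is
     * homed on the client's node, the response mailbox on the server's node.
     * Linux/libnuma sketch for illustration; link with -lnuma -lpthread.
     */
    #include <numa.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define CLIENT_NODE 0     /* placeholder node numbers */
    #define SERVER_NODE 1

    static _Atomic long *request;    /* written by the client, read by the server */
    static _Atomic long *response;   /* written by the server, read by the client */

    static void *server(void *arg)
    {
        long last = 0;
        (void)arg;
        numa_run_on_node(SERVER_NODE);          /* keep the server on its own node */
        for (;;) {
            long req = atomic_load_explicit(request, memory_order_acquire);
            if (req < 0)
                break;
            if (req != last) {
                last = req;
                atomic_store_explicit(response, req + 1, memory_order_release);
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t srv;
        long i;

        if (numa_available() < 0) {
            fprintf(stderr, "libnuma not available\n");
            return 1;
        }
        /* Home each mailbox with its writer; numa_alloc_onnode() places the
         * backing page on the requested node. */
        request  = numa_alloc_onnode(sizeof *request,  CLIENT_NODE);
        response = numa_alloc_onnode(sizeof *response, SERVER_NODE);
        atomic_store(request, 0);
        atomic_store(response, 0);

        numa_run_on_node(CLIENT_NODE);          /* client thread runs on its node */
        pthread_create(&srv, NULL, server, NULL);

        for (i = 1; i <= 100000; i++) {
            atomic_store_explicit(request, i, memory_order_release);
            while (atomic_load_explicit(response, memory_order_acquire) != i + 1)
                ;                               /* busy-wait on the reply */
        }
        atomic_store_explicit(request, -1, memory_order_release);
        pthread_join(srv, NULL);
        return 0;
    }

    Swapping the two node arguments gives the unfavorable placement described above, and comparing round-trip times between the two variants is essentially the experiment in miniature.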

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated the engine goes up the tree using max_fore_level and not top_level -1. Max_fore_level has to be less than or equal to top_level -1. Due to this requirement, combinations falling under the same top level -1 member must be in the same task. A member of the top level -1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.

    - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.

    - Control of tasks: The number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You are used to using the branch multiplier in this calculation. As of 7.3.1.3, the branch ID multiple is deprecated.

    - Discovery of current branch size: To resolve this you must review the 2nd highest level in the forecast tree (below highest/highest) as this is the level which determines the size of the branches. If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.

    - Control of forecast tree branch size:

      - Run the following sql to determine how evenly the branches are being split by the engine:

        select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;

        This will give you an understanding of whether some of the individual branches have an unusually large number of rows and thus might indicate that the engine is not efficiently dividing up the parallel tasks.

      - Based on the results of this sql, we may want to adjust the branch id multiplier and/or the number of engines (both of these settings are found in the Engine Administrator).

        select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;

        This will give us an understanding of the level of the forecast tree at which the forecast is being generated. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on the results of this we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.
    For example:

    - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches.
    - If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
    - For example, suppose the highest level of the forecast tree is set to Brand/All Locations.
    - You have 10 brands but 2 of the brands account for 67% and 29% of all combinations.
    - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
    - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
    - A correctly configured Forecast Tree is something that is done by the Implementation team and is not the responsibility of Oracle Support.

    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for 'TargetTaskSize' depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: DEV strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).

    - How to control the number of engines? Determine how many CPUs are on the machine(s) that is (are) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per available CPU. So for example, if you are running the engine on a machine that has 4 CPUs then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

    - Where do I set the number of engines? To add multiple computers where the engine will run, first make a backup of the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:

      <Entry>
        <Key argument="ComputerNames" />
        <Value type="string" argument="localhost,localhost,localhost" />
      </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations.

    - Level 1 combinations are known as the group size. The engine manager will use this parameter to attempt to create branches of similar size.
    - The engine manager will not create engines that do not have a branch.

    The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that are more equal in size, which improves engine performance.
    The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of 'TargetTaskSize' level 1 combinations, before adding new branches.

    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year's conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let's travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft's Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, "If it's not MVC, it's crap". Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model -- it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface about software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
    These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as "the future". These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems -- basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel "dirty". The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive (i.e. PHP or Ruby). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts -- both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
    And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it's a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does -- it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the "Future of ASP.NET". In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward -- perhaps the moniker of "ASP.NET Web Stack" coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET -- the glue that is able to pull this fragmented mess back together again. The "One ASP.NET" strategy will promote the use of all frameworks - WebForms, MVC, and Web API, even within the same web project.
    Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that "hybrid" applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn't, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a "hybrid" model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the "One ASP.NET" strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article

  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but as I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German) but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act.

    Brief introduction to AVM Fritz!Box devices
    The Fritz!Box series of AVM has been around for more than a decade and those little 'red boxes' have a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want to have peace of mind in your small network then a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system.

    Limitations in the administrative Web UI
    Unfortunately, there are no possibilities to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe and the implicit time handling is correct for your location.

    Adjusting the timezone
    I got my device through an order at Amazon Germany already some time ago, and honestly I wasn't bothered too much about the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So, finally I sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device.

    You've been warned - no support
    Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So, be warned and continue only if you're sure about what you are going to do. The following solutions are 'as-is' and they worked for my box flawlessly, but they may cause an issue in your case. Don't blame me...

    Solution 1 - Backup, modify and restore
    That's the way described in the c't article and a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration to a local file on your machine. Despite some online postings it is not necessary to specify a password in order to secure or encrypt your backup. IMHO, this only adds another unnecessary layer of complexity to the process. Anyway, next you should create another copy of your settings and keep it unmodified. That's our safety net to restore the current settings in case we have to issue a factory reset on the box.
Now, open the configuration file with an advanced text editor which is capable of dealing with Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ will. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file - one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device's internal checksum validation. Without this line, your settings won't be accepted. Caution: The directives are case-sensitive and your outcome should read something like this:

**** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
Password=$$$$<ignore>
FirmwareVersion=84.05.52
CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
OEM=avm
Country=049
Language=de
NoChecks=yes
**** CFGFILE:ar7.cfg
/*
 * /var/flash/ar7.cfg
 * Mon Jul 29 10:49:18 2013
 */
ar7cfg {
...

Then search for the expression 'timezone' and you should find a section like this one (~ line 1113):

timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}

We would like to handle the timezone setting of the device manually, and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:

timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}

We specify the designation and the offset in hours of the timezone we would like to have. Caution: The offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck!

Solution 2 - Telnet
A more elegant - read: technically more interesting - way to adjust configuration settings in your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel, and you have to enable it with a connected telephone. In order to activate the telnet service, dial the following combination:

#96*7*
#96*8* (to disable telnet again after work has been completed)

If you're using an AVM handset like the Fritz!Fon, then you will receive a confirmation message on the display like so: telnetd ein. Next, depending on your favourite operating system, you either launch a Command Prompt in Windows or a terminal in Linux, get your admin password ready, and connect to your box like so:

$ telnet fritz.box
Trying 192.168.1.1...
Connected to fritz.box.
Escape character is '^]'.
password:

BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
ermittle die aktuelle TTY
tty is "/dev/pts/0"
Console Ausgaben auf dieses Terminal umgelenkt
#

That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi - nvi. 
Let's open a second browser window with the fine manual page of nvi and start to edit our configuration file:

# nvi /var/flash/ar7.cfg

In the configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing the following:

/timezone    (press Enter/Return)

Now we should see the exact same lines as in the backed-up version:

timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}

And of course, we apply the same changes as described in the previous section:

timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}

Finally, we have to write our changes back to the file and apply the new settings:

:wq    (press Enter/Return)
# ar7cfgchanged

That's it! Close the telnet session by pressing Ctrl+] and entering 'quit'.

Additional ideas...
There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and the installation of an SSH server on the box would be a first step to improve this situation - it would also let you avoid running telnet at all. Sometimes there might be problems in your VoIP connections; feel free to adjust the settings for codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.
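
By the way, if you expect to repeat the Solution 1 edit (for example after a factory reset), the changes are simple enough to script. Below is a rough C# sketch that applies them to an exported configuration file; the file names are assumptions, the string matching is deliberately simple, and you should review the patched output before restoring it to the box.

    using System;
    using System.IO;

    class FritzConfigPatcher
    {
        static void Main()
        {
            // Assumed file names - adjust to match your own export.
            var source = "fritzbox-export.orig";
            var target = "fritzbox-export.patched";

            var text = File.ReadAllText(source);

            // 1) Override the internal checksum validation (see Solution 1 above).
            if (!text.Contains("NoChecks=yes"))
                text = text.Replace("Language=de", "Language=de\nNoChecks=yes");

            // 2) Patch only the timezone_manual block, not every "enabled = no;" in the file.
            var start = text.IndexOf("timezone_manual {", StringComparison.Ordinal);
            if (start >= 0)
            {
                var end = text.IndexOf('}', start) + 1;
                var block = text.Substring(start, end - start)
                                .Replace("enabled = no;", "enabled = yes;")
                                .Replace("TZ_string = \"\";", "TZ_string = \"MUT-4\";");
                text = text.Substring(0, start) + block + text.Substring(end);
            }

            File.WriteAllText(target, text);
            Console.WriteLine("Patched export written to " + target);
        }
    }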

    Read the article

  • What does "general purpose system" mean for Java SE Embedded?

    - by Majid Azimi
    The Oracle website says this about the Java SE Embedded license: development is free, but royalties are required upon deployment on anything other than general purpose systems. What does "general purpose system" mean here? We have a sensor network around the country. On each box we have installed, there is a microcontroller-based board that gets data from the environment and sends it over a serial port to an ARM-based embedded board. On this board there is a Java process which reads the data and submits it to our central server using JMS. Is this categorized as a general purpose system? Sorry I'm asking this here. We are in Iran, and there is no Oracle office here to ask.

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible, wonder dog. Ginger is a Hungarian Vizsla (or Hungarian Pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave. Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs. Here are images of the final two maps: Google:  Bing:   To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work): <style type="text/css">     html     {         height: 100%;     }       body     {         height: 100%;         margin: 0;         padding: 0;     } </style> Here is how the map scripts are included. Google requires the developer key when loading the JavaScript, Bing requires it when the map object is created: Google: <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false" > </script> Bing: <script  type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"> </script> Note: I use jQuery to manipulate the DOM elements, which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery. 
Here is how the maps are created: Common Code (the same for both Google and Bing Maps):     <script type="text/javascript">         var gTheMap;         var gMarker1;         var gMarker2;           $(document).ready(DocLoaded);           function DocLoaded()         {             // golf course coordinates             var StartLat = 44.924254;             var StartLng = -93.366859;               // what element to display the map in             var mapdiv = $("#map_div")[0];   Google:         // where on earth the map should display         var StartPoint = new google.maps.LatLng(StartLat, StartLng);           // create the map         gTheMap = new google.maps.Map(mapdiv,             {                 center: StartPoint,                 zoom: 18,                 mapTypeId: google.maps.MapTypeId.SATELLITE             });           // place two markers         marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));           DragEnd(null);     } Bing:         // where on earth the map should display         var StartPoint = new  Microsoft.Maps.Location(StartLat, StartLng);           // create the map         gTheMap = new Microsoft.Maps.Map(mapdiv,             {                 credentials: 'XXXXXXXXXXXXXXXXXXX',                 center: StartPoint,                 zoom: 18,                 mapTypeId: Microsoft.Maps.MapTypeId.aerial             });           // place two markers         marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));           DragEnd(null);     } Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it. Note: When creating the Bing map, use the developer key for the credentials property. I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging. 
Here is the code to place a marker: Google: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new google.maps.Marker(         {             position: location,             map: gTheMap,             draggable: true         });     marker.addListener('dragend', DragEnd);     return marker; } Bing: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new Microsoft.Maps.Pushpin(location,     {         draggable : true     });     Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);     gTheMap.entities.push(marker);     return marker; } Here is the code that runs when the user stops dragging a marker: Google: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Event) {     var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);     var yards = meters * 1.0936133;     $("#message").text(yards.toFixed(1) + ' yards');    // draw a line connecting the points     var Endpoints = [marker1.position, marker2.position];       if (gLine == null)     {         gLine = new google.maps.Polyline({             path: Endpoints,             strokeColor: "#FFFF00",             strokeOpacity: 1.0,             strokeWeight: 2,             map: gTheMap         });     }     else        gLine.setPath(Endpoints); } Bing: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Args) {    var Distance =  CalculateDistance(marker1._location, marker2._location);      $("#message").text(Distance.toFixed(1) + ' yards');       // draw a line connecting the points    var Endpoints = [marker1._location, marker2._location];           if (gLine == null)    {        gLine = new Microsoft.Maps.Polyline(Endpoints,            {                strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB                strokeThickness : 2            });          gTheMap.entities.push(gLine);    }    else        gLine.setLocations(Endpoints);  }  Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page. Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150 markers on the fairway and put the other pin/marker on one of them. Final Notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view, there are leaves on the tree, in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo Google Map Demo I hope someone finds this useful. Steve Wellens   CodeProject
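
For anyone who doesn't want to pick CalculateDistance off the demo page: the sketch below is not the author's code, just an illustration of the same great-circle idea, written here in C# rather than JavaScript. The Earth-radius constant and the meters-to-yards factor are the usual approximations.

    using System;

    static class GeoDistance
    {
        // Haversine great-circle distance between two lat/lng points, in yards.
        // Illustrative sketch only - not the CalculateDistance used on the demo page.
        public static double InYards(double lat1, double lng1, double lat2, double lng2)
        {
            const double EarthRadiusMeters = 6371000.0;   // mean Earth radius
            const double MetersToYards = 1.0936133;

            var dLat = ToRadians(lat2 - lat1);
            var dLng = ToRadians(lng2 - lng1);

            var a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                    Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                    Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
            var c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

            return EarthRadiusMeters * c * MetersToYards;
        }

        private static double ToRadians(double degrees)
        {
            return degrees * Math.PI / 180.0;
        }
    }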

    Read the article

  • What do you think of Microsoft Dynamics CRM Online? Try Microsoft's CRM solution free for 30 days and share your opinion

    What do you think of Microsoft Dynamics CRM Online? Try Microsoft's cloud CRM solution free for 30 days and share your opinion. CRM (Customer Relationship Management) has become an indispensable tool for capturing, processing and analyzing customer information in order to build customer loyalty. Among the CRM solutions currently available on the market, Microsoft Dynamics CRM ranks prominently. [IMG]http://rdonfack.developpez.com/images/MicrosoftDynamicCRM.jpg[/IMG] The service benefits from cloud computing capabilities, allowing instant access to your data, wherever you are, from a...

    Read the article

  • Lots of first chance Microsoft.CSharp.RuntimeBinderExceptions thrown when dealing with dynamics

    - by Orion Edwards
    I've got a standard 'dynamic dictionary' type class in C# - class Bucket : DynamicObject { readonly Dictionary<string, object> m_dict = new Dictionary<string, object>(); public override bool TrySetMember(SetMemberBinder binder, object value) { m_dict[binder.Name] = value; return true; } public override bool TryGetMember(GetMemberBinder binder, out object result) { return m_dict.TryGetValue(binder.Name, out result); } } Now I call it, as follows: static void Main(string[] args) { dynamic d = new Bucket(); d.Name = "Orion"; // 2 RuntimeBinderExceptions Console.WriteLine(d.Name); // 2 RuntimeBinderExceptions } The app does what you'd expect it to, but the debug output looks like this: A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll 'ScratchConsoleApplication.vshost.exe' (Managed (v4.0.30319)): Loaded 'Anonymously Hosted DynamicMethods Assembly' A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll Any attempt to access a dynamic member seems to output a RuntimeBinderException to the debug logs. While I'm aware that first-chance exceptions are not a problem in and of themselves, this does cause some problems for me: I often have the debugger set to "break on exceptions", as I'm writing WPF apps, and otherwise all exceptions end up getting converted to a DispatcherUnhandledException, and all the actual information you want is lost. WPF sucks like that. As soon as I hit any code that's using dynamic, the debug output log becomes fairly useless. All the useful trace lines that I care about get hidden amongst all the useless RuntimeBinderExceptions Is there any way I can turn this off, or is the RuntimeBinder unfortunately just built like that? Thanks, Orion
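
A pragmatic aside (an assumption about acceptable trade-offs, not something from the question): if the binder noise in the debug log is the main pain point, dropping dynamic in favour of a plain indexer-based bag avoids the runtime binder - and therefore those first-chance RuntimeBinderExceptions - entirely:

    using System;
    using System.Collections.Generic;

    // A non-dynamic property bag: no DynamicObject, no runtime binder involved,
    // so no first-chance RuntimeBinderExceptions in the debug output.
    class PlainBucket
    {
        private readonly Dictionary<string, object> m_dict = new Dictionary<string, object>();

        public object this[string name]
        {
            get
            {
                object value;
                return m_dict.TryGetValue(name, out value) ? value : null;
            }
            set { m_dict[name] = value; }
        }
    }

    class Demo
    {
        static void Main()
        {
            var d = new PlainBucket();
            d["Name"] = "Orion";
            Console.WriteLine(d["Name"]);   // no binder, no binder exceptions
        }
    }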

    Read the article

  • News feed APIs for general news

    - by dassouki
    I'm building a database + tool that scours news feeds for a certain term, for example "food poisoning from nuts". I want to scour social media sites, news sites, major news aggregators, etc. for that term. Question 1: What are some of the news aggregator APIs out there? Question 2: How would you go about coding and receiving only the latest news from the API?
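
On question 2, one generic polling pattern that works against any RSS/Atom feed (the feed URL below is only a placeholder) is to remember the timestamp of the newest item already processed and keep only items published after it, while filtering for the term. A hedged C# sketch using the built-in SyndicationFeed class:

    using System;
    using System.Linq;
    using System.ServiceModel.Syndication;   // reference System.ServiceModel.dll
    using System.Xml;

    class FeedPoller
    {
        private static DateTimeOffset _lastSeen = DateTimeOffset.MinValue;

        static void Main()
        {
            // Placeholder feed URL and search term - assumptions, not recommendations.
            Poll("http://example.com/news/rss", "food poisoning from nuts");
        }

        static void Poll(string feedUrl, string term)
        {
            using (var reader = XmlReader.Create(feedUrl))
            {
                var feed = SyndicationFeed.Load(reader);

                // Keep only items newer than the last one we processed, oldest first.
                var fresh = feed.Items
                    .Where(i => i.PublishDate > _lastSeen)
                    .Where(i => Matches(i, term))
                    .OrderBy(i => i.PublishDate)
                    .ToList();

                foreach (var item in fresh)
                {
                    Console.WriteLine("{0:u}  {1}", item.PublishDate, item.Title.Text);
                    _lastSeen = item.PublishDate;   // remember the high-water mark
                }
            }
        }

        static bool Matches(SyndicationItem item, string term)
        {
            var title = item.Title == null ? "" : item.Title.Text;
            var summary = item.Summary == null ? "" : item.Summary.Text;
            return title.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0
                || summary.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0;
        }
    }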

    Read the article

  • About unit testing a function in the zend framework and unit testing in general

    - by sanders
    Hello people, I am diving into the world of unit testing. And I am sort of lost. I learned today that unit testing is testing whether a function works. I wanted to test the following function: public function getEventById($id) { return $this->getResource('Event')->getEventById($id); } So I wanted to test this function as follows: public function test_Event_Get_Event_By_Id_Returns_Event_Item() { $p = $this->_model->getEventById(42); $this->assertEquals(42, EventManager_Resource_Event_Item_Interface); $this->assertType('EventManager_Resource_Event_Item_Interface', $p); } But then I got the error: 1) EventTest::test_Event_Get_Event_By_Id_Returns_Event_Item Zend_Db_Table_Exception: No adapter found for EventManager_Resource_Event /home/user/Public/ZendFramework-1.10.1/library/SF/Model/Abstract.php:101 /var/www/nrka2/application/modules/eventManager/models/Event.php:25 But then someone told me that I am currently unit testing and not doing an integration test. So I figured that I have to test the function getEventById in a different way. But I don't understand how. All this function does is call a resource and return the event by ID.

    Read the article

  • Just a general THANK YOU to EVERYONE. [closed]

    - by ajax81
    Hi All, I really just wanted to thank everybody that participates in the stackoverflow community. On more than one occasion, your minds have saved me from soul-eating project managers and career-ending deadlines. The commendable awareness exhibited by contributors that their answers are studied/used as learning material by millions of developers all over the world has created a regulated trust that seemingly keeps the nonsense (and egos) at the bottom of the barrel and out of the way. As an up-and-coming developer with so much to learn, I am grateful for each and every one of their patient contributions. I wish I could come up with a catchy/funny sign-off that makes everybody feel good, but I lack the funny bone that so many of the people on this site seem to have been born with. Instead, I can only leave my gratitude and a promise that as long as the community stays this great, I'll stay an avid reader...and one day be experienced enough to carry the torch of contribution. Sincerely, Daniel the Intern

    Read the article

  • Unit testing a database connection and general questions on database-dependent code and unit testing

    - by dotnetdev
    Hi, if I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way? From a testability standpoint, is it best to have the connection logic as one method and the method to get data back as a separate method? Also, how would I test methods which get data back from a database? I may do an assert against expected data, but the actual data can change and still be the right resultset. EDIT: For the last point, to check data, if it's supposed to be a list of cars, then I can check they are real car models. Or if they are a bunch of web servers, I can have a list of existing web servers on the system, return that from the code under test, and get the test result. If the results are different, is the data the issue rather than the query? Thanks
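
One common way to keep this testable (a general pattern, not the only answer - all names below are illustrative assumptions) is to put the data access behind an interface, so the "does it connect" concern is covered by an integration test against a real database while the "what do I do with the data" concern is unit tested against a fake:

    using System.Collections.Generic;

    // The code under test depends on this abstraction instead of a concrete connection.
    public interface ICarRepository
    {
        IList<string> GetCarModels();
    }

    // Real implementation (exercised by integration tests against a test database):
    // public class SqlCarRepository : ICarRepository { ... }

    // Fake used by unit tests: deterministic data, no database required.
    public class FakeCarRepository : ICarRepository
    {
        public IList<string> GetCarModels()
        {
            return new List<string> { "Astra", "Corsa", "Insignia" };
        }
    }

    public class CarCatalogue
    {
        private readonly ICarRepository _repository;

        public CarCatalogue(ICarRepository repository)
        {
            _repository = repository;
        }

        public int CountModels()
        {
            return _repository.GetCarModels().Count;
        }
    }

    // Unit test (NUnit-style attribute shown as an assumption about the test framework):
    // [Test]
    // public void CountModels_returns_number_of_models_from_repository()
    // {
    //     var catalogue = new CarCatalogue(new FakeCarRepository());
    //     Assert.AreEqual(3, catalogue.CountModels());
    // }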

    Read the article

  • Integrating Dynamics CRM with Sharepoint ASCX SecurityException Issue

    - by Gavin
    Hi, I have an ASCX control (WebParts aren't used in this solution) which interrogates CRM 4's data via the API provided by Microsoft.Crm.Sdk and Microsoft.Crm.SdkTypeProxy. The solution works until it's deployed to Sharepoint. Initially I received the following error: [SecurityException: That assembly does not allow partially trusted callers.] MyApp.SharePoint.Web.Applications.MyAppUtilities.RefreshUserFromCrm(String login) +0 MyApp.SharePoint.Web.Applications.MyApp_LoginForm.btnLogin_Click(Object sender, EventArgs e) +30 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111 Then I tried wrapping the calling code in the ASCX with SPSecurity.RunWithElevatedPrivileges: SPSecurity.RunWithElevatedPrivileges(delegate() { // FBA user may not exist yet or require refreshing MyAppUtilities.RefreshUserFromCrm(txtUser.Text); }); But this resulted in the following error: [SecurityException: Request for the permission of type 'Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' failed.] MyApp.SharePoint.Web.Applications.MyApp_LoginForm.btnLogin_Click(Object sender, EventArgs e) +0 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111 When I elevate the trust level in the Sharepoint site to full, everything works fine; however, I need to come up with a solution that uses minimal trust (or a customised minimal trust). I'm also trying to steer clear of adding anything to the GAC. Any ideas? I assume the issue is occurring when trying to call functionality from Microsoft.Crm.* Thanks in advance for any help anyone can provide. Cheers, Gavin

    Read the article

  • General Web Programming/designing Question: ?

    - by Prasad
    Hi, I have been in web programming for 2 years (self-taught - a biology researcher by profession). I designed a small wiki with the needed functionality and a scientific RTE - of course, a lot more is expected. I used the mootools framework and AJAX extensively. I was always curious whenever I saw query strings passed in a URL - long encrypted query strings getting passed directly to the server; Google's design in particular is like this. I think this is the start of providing a Web Service to a client, I guess. Now, my question is: is this a special, highly professional, efficient/advanced web design technique for communicating queries via the URL? I always felt that direct URL-based communication is faster. I tried my bit and could send a query through the URL directly. Here is the link: http://sgwiki.sdsc.edu/getSGMPage.php?8 By this, the client can link directly to the desired page instead of searching, and/or can automate. There are many possibilities. The next request: can I be pointed to resources on this technique of web programming? Oops: I am sorry if I have not been able to convey my request clearly. Prasad.

    Read the article

  • Excluding a script from the general UrlRewrite rules

    - by Steven
    Hi, I have the following rewrite rules for a website:

RewriteEngine On
# Stop reading config files
RewriteCond %{REQUEST_FILENAME} .*/web.config$ [NC,OR]
RewriteCond %{REQUEST_FILENAME} .*/\.htaccess$ [NC]
RewriteRule ^(.+)$ - [F]
# Rewrite to url
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !^(/bilder_losning/|/bilder/|/gfx/|/js/|/css/|/doc/).*
RewriteRule ^(.+)$ index.cfm?smartLinkKey=%{REQUEST_URI} [L]

Now I have to exclude a script, including any query strings it may have, from the above rules, so that I can access and execute it in the normal way; at the moment the whole URL is being ignored and forwarded to the index page. I need to have access to the script shoplink.cfm in the root, which takes the variables tduid and url (shoplink.cfm?tduid=1&url=). I have tried to resolve it using this:

# maybe?: RewriteRule !(^/shoplink.cfm [QSA]

but to be honest, I don't have much of a clue about URL rewriting and have no idea what I am supposed to write. I just know that the above will generate a nice 500 error. I have been looking around a lot on stackoverflow and other websites on the same subject, but all I see is people trying to exclude directories, not files. In the worst case I could add the script to a separate directory and exclude the directory from the rewrite rules, but I would rather not, since the script should really remain in the root. Anyone who can help me out on this subject? Thanks in advance. Steven Esser ColdFusion programmer

    Read the article

  • Union struct produces garbage and general question about struct nomenclature

    - by SoulBeaver
    I read about unions the other day (today) and tried the sample functions that came with them. Easy enough, but the result was clear and utter garbage. The first example is: union Test { int Int; struct { char byte1; char byte2; char byte3; char byte4; } Bytes; }; where an int is assumed to have 32 bits. After I set a value Test t; t.Int = 7; and then cout the individual bytes, cout << t.Bytes.byte1 << etc..., there is nothing displayed, but my computer beeps. Which is fairly odd, I guess. The second example gave me even worse results. union SwitchEndian { unsigned short word; struct { unsigned char hi; unsigned char lo; } data; } Switcher; Looks a little wonky in my opinion. Anyway, from the description it says this should automatically store the result in a high/little endian format when I set the value like Switcher.word = 7656; yet calling cout << Switcher.data.hi << endl, the result was symbols not even defined in the ASCII chart. Not sure why those are showing up. Finally, I had an error when I tried correcting the example by, instead of placing Bytes at the end of the struct, positioning it right next to it. So instead of struct {} Bytes; I wanted to write struct Bytes {}; This tossed me a big ol' error. What's the difference between these? Since C++ cannot have unnamed structs, it seemed, at the time, pretty obvious that the Bytes positioned at the beginning and at the end are the things that name it. Except no, that's not the entire answer I guess. What is it then?

    Read the article

  • codeigniter and OOP general question regarding calling functions, and the parent constructor

    - by thrice801
    Ok, so I have some gaps in my understanding of PHP OOP - classes and functions specifically, and this whole constructor deal. I use both Zend and CI, but right now I'm trying to figure this out in CI as it is less complicated. So all I'm trying to do is understand how to call a function from a view page in CodeIgniter. I understand that might go against MVC, but I'm working with an API and search results that are not from my database, so basically I want to define a function in my class that I am able to call in one of my view pages... and I keep getting a "Fatal error: Call to undefined function functionname" error no matter what I try. I thought I just had to declare public function testing() { echo "testing testing 123"; } but calling that from the view I get that error. Then I read something about having to call parent::Controller(); in the index of the class where the testing function also resides? But that didn't work either. Anyways, can someone explain what I need to do in order to call the "testing()" function on one of my view pages? Clarification on the constructor and what exactly parent::Controller() even does would be much appreciated as well.

    Read the article

  • general database modeling and django specific modeling

    - by Shreko
    I'm wondering what is the best way to model something like the following. Let's say my company sells metal bars (parameters/fields are: length, profile_type, quantity, etc.) of different profiles, where profiles may be pipe (pipe_diameter, wall_thickness) or hollow_rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2-5 parameters. Should everything be in a single table like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be pipe, rectangle, etc.? Or should every shape have its own table with its own parameters, with table_bars keeping only id, length, quantity, profile_type and profile_id? And are there any Django-specific issues if multiple tables are the best answer? Thanks

    Read the article

  • General Question About WPF Control Behavior and using Invoke

    - by Phil Sandler
    I have been putting off activity on SO because my current reputation is "1337". :) This is a question of "why" and not "how". By default, it seems that WPF does not set focus to the first control in a window when it's opening. In addition, when a textbox gets focus, by default it does not have its existing text selected. So basically, when I open a window, I want focus on the first control of the window, and if that control is a textbox, I want its existing text (if any) to be selected. I found some tips online to accomplish each of these behaviors, and combined them. The code below, which I placed in the constructor of my window, is what I came up with: Loaded += (sender, e) => { MoveFocus(new TraversalRequest(FocusNavigationDirection.Next)); var textBox = FocusManager.GetFocusedElement(this) as TextBox; if (textBox != null) { Action select = textBox.SelectAll; //for some reason this doesn't work without using invoke. Dispatcher.Invoke(DispatcherPriority.Loaded, select); } }; So, my question: why does the above not work without using Dispatcher.Invoke? Does something built into the behavior of the window (or textbox) cause the selected text to be de-selected post-loading? Maybe related, maybe not--another example of where I had to use Dispatcher.Invoke to control the behavior of a form: http://stackoverflow.com/questions/2602979/wpf-focus-in-tab-control-content-when-new-tab-is-created
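
For comparison, a commonly used alternative that avoids the deferred-call question altogether is to select the text whenever a textbox gains keyboard focus. The sketch below is illustrative only (it does not explain why the original code needs Invoke); the class name and placement are assumptions:

    // Sketch: select all text whenever any TextBox in the window gains keyboard focus.
    // Attach the handler in the window's constructor; names are illustrative.
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            AddHandler(UIElement.GotKeyboardFocusEvent,
                       new KeyboardFocusChangedEventHandler(OnGotKeyboardFocus),
                       handledEventsToo: true);
        }

        private void OnGotKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e)
        {
            // Whenever keyboard focus lands on a TextBox, select its existing text.
            var textBox = e.NewFocus as TextBox;
            if (textBox != null)
            {
                textBox.SelectAll();
            }
        }
    }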

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >