Search Results

Search found 13880 results on 556 pages for 'explicit interface'.

Page 245/556 | < Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >

  • Jupiter in Ubuntu 13.10 (Laptop Overheating)

    - by Daniel Pacheco
    I was wondering whether Jupiter (an interface for display, power and device control) will work in Ubuntu 13.10, because my laptop (Toshiba Satellite C855D, AMD A6-4400M with Radeon HD Graphics, running Ubuntu 13.04 x64) keeps overheating. I tried some other tools, like laptop-mode-tools and TLP, but none of them work at all. Jupiter was the only option, and it's supposedly discontinued; the version I'm using is maintained by the JoliCloud team, but they told me they're not sure whether it will work with 13.10... If it doesn't work, I'm definitely not upgrading, since overheating is a major issue for me... Thanks in advance!

    Read the article

  • Siebel Open UI Training for Oracle EMEA CRM Partners - Free - Utrecht NL - January 22/23 2012

    - by Richard Lefebvre
    Have you heard about Siebel Open UI? It is the new, state-of-the-art user interface for Siebel, offering an amazing user experience in any browser. Oracle is planning a free-of-charge, two-day training session, delivered by Oracle Product Development specialists, in Utrecht (NL) on January 22-23, 2012. Seats are very limited. If you or your colleagues are interested in applying for one, please send an email to [email protected] with the contact details of the individuals you would like to nominate. If you would like to know more about Siebel Open UI before applying, please send an email to [email protected] to receive a short PPT deck featuring a brief Siebel Open UI description, its benefits for (system integrator) partners, and the detailed agenda. Selected participants will then be invited to register via the Oracle APEX system.

    Read the article

  • RIM unveils BlackBerry 6.0, the future OS of its phones

    Update of 28.04.2010, by Katleen. RIM unveils BlackBerry 6.0, the future OS of its BlackBerry phones. The professionals' phones par excellence, a recipe that works and that hadn't changed in quite a while. Yesterday, however, their manufacturer RIM presented the upcoming version of the operating system for these handsets. The BlackBerry OS 6.0 interface is effective, even if it borrows a lot from other products, such as the iPhone or the Zune. It is very functional and consistent. On the technical side, the browser, which had been getting mixed feedback from users, will be equipped with a recent version of the WebKit HTML rendering engine and will therefore display the...

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs. This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
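    As an aside, the stub-versus-real comparison step described in the prototype above can be sketched in a few lines. The following is a rough, hypothetical illustration only: it is not the perl/elfdump tooling the author describes, it uses the third-party pyelftools Python package, and it deliberately ignores ELF versioning, symbol aliasing, and bss placement, all of which the post says a faithful check must cover. The file paths and function names are made up for the example.

        from elftools.elf.elffile import ELFFile   # third-party: pyelftools

        def linking_interface(path):
            """Collect {name: st_size} for defined global/weak symbols in .dynsym."""
            with open(path, 'rb') as f:
                elf = ELFFile(f)
                dynsym = elf.get_section_by_name('.dynsym')
                if dynsym is None:
                    return {}
                symbols = {}
                for sym in dynsym.iter_symbols():
                    defined = sym['st_shndx'] != 'SHN_UNDEF'
                    is_global = sym['st_info']['bind'] in ('STB_GLOBAL', 'STB_WEAK')
                    if sym.name and defined and is_global:
                        symbols[sym.name] = sym['st_size']
                return symbols

        def compare(real_path, stub_path):
            """Report symbols missing from either object, or whose sizes differ."""
            real = linking_interface(real_path)
            stub = linking_interface(stub_path)
            for name in sorted(set(real) | set(stub)):
                if name not in stub:
                    print('missing from stub: ' + name)
                elif name not in real:
                    print('extra in stub: ' + name)
                elif real[name] != stub[name]:
                    # Size mismatches matter because of copy relocations on data.
                    print('size mismatch: %s %d != %d' % (name, real[name], stub[name]))

        if __name__ == '__main__':
            compare('libc.so.1', 'stubs/libc.so.1')   # illustrative paths

    As the rest of the post argues, any such after-the-fact comparison rederives what the link-editor already knew, so having ld itself validate the stub data against the real object is the more reliable design.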
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target, called stubinstall, is added to the Makefiles; it delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward. Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Embarcadero launches DBArtisan XE, a new cross-platform tool for managing heterogeneous databases

    Embarcadero launches DBArtisan XE, a cross-platform tool for managing heterogeneous databases. DBArtisan XE is a heterogeneous database administration tool with centralized (and on-demand) deployment and licensing. It is the latest product in the DBArtisan family of tools published by Embarcadero Technologies. With native support for several database platforms, DBArtisan XE lets database administrators maximize, from a single interface, the performance, availability, and security of their databases whatever their type. The vendor highlights "diagnostics i...

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up from my post about Sending Outlook Invites, here is a shorter one on how to respond. Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude. If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you, then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made; if you don't receive a response to your email, send a DECLINE when you've waited long enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative?" and you can have a discussion about it. When you deem it appropriate, instead of the options above, you can also use the counter-propose feature of Outlook, but IMO that feature has a questionable interaction model and UI (on both the sender and recipient side), so many people get confused by it. BTW, two of my Outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don't have a rule for the DECLINE and TENTATIVE responses because typically I follow up with folks who send those. The second rule ensures that all invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count: when sending a response to an invite, the item disappears from the folder and hence it is empty and not nagging me. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • What package do I need to install to develop plugins for gedit?

    - by Wes
    I'm using Ubuntu 12.04 with Python 2.7.3 and PyGObject, and I'd like to develop plugins for gedit in Python. I found a simple-looking tutorial for this sort of thing here. According to the tutorial, I need the Gedit module to interact with the plugin interface: from gi.repository import GObject, Gedit. I keep getting an import error when trying to import the Gedit module. So, my question is: what package do I need to install to get this module? I've tried: gedit-dev, gedit-plugins. Edit: Here is the full traceback for the above statement:

        ERROR:root:Could not find any typelib for Gedit
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: cannot import name Gedit
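    For context, the kind of plugin that tutorial builds around that import is typically a window-activatable class plus a small .plugin metadata file. The sketch below is a generic gedit 3 skeleton, not an answer taken from this thread; the package name gir1.2-gedit-3.0 (believed to provide the Gedit typelib on Ubuntu 12.04), the file names, and the install path are assumptions to verify rather than facts from the question.

        # example.plugin -- INI-style metadata placed next to the module, e.g. in
        # ~/.local/share/gedit/plugins/ (path assumed; verify for your setup):
        #
        #   [Plugin]
        #   Loader=python
        #   Module=example
        #   Name=Example
        #   Description=Minimal gedit plugin skeleton

        # example.py -- minimal window-activatable plugin; assumes the Gedit typelib
        # is installed (believed to come from the gir1.2-gedit-3.0 package on 12.04).
        from gi.repository import GObject, Gedit

        class ExamplePlugin(GObject.Object, Gedit.WindowActivatable):
            __gtype_name__ = "ExamplePlugin"
            window = GObject.property(type=Gedit.Window)

            def do_activate(self):
                # Called once per gedit window when the plugin is enabled.
                print("Example plugin activated")

            def do_deactivate(self):
                print("Example plugin deactivated")

            def do_update_state(self):
                # Called when the window state changes, e.g. the active tab switches.
                pass

    Once the import error is resolved, dropping both files into the plugins directory and enabling the plugin from gedit's Preferences > Plugins dialog is the usual way to test a skeleton like this.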

    Read the article

  • Square Reader Modified to Record Off Old Reel-to-Reel Tape [Video]

    - by Jason Fitzpatrick
    The Square Reader is a tiny magnetic credit card reader that has taken the mobile payment industry by storm. This clever hack dumps the credit card reading in favor of snagging the audio from old music reels. Evan Long was curious about whether the through-the-headphones interface of the Square Reader could be used to read audio data off old magnetic recordings. With a very small modification (he had to bend a metal tab inside the reader to allow the audio tape to slide through more easily) he was able to listen to and record audio off old reels. Watch the video above to see it in action or hit up the link below to read more about his project. iPod Meets Reel [via Make]

    Read the article

  • New Release of ROracle posted to CRAN

    - by mhornick
    Oracle recently updated ROracle to version 1.1-2 on CRAN with enhancements and bug fixes. The major enhancements include the introduction of Oracle Wallet Manager and support for datetime and interval types.  Oracle Wallet support in ROracle allows users to manage public key security from the client R session. Oracle Wallet allows passwords to be stored and read by Oracle Database, allowing safe storage of database login credentials. In addition, we added support for datetime and interval types when selecting data, which expands ROracle's support for date data.  See the ROracle NEWS for the complete list of updates. We encourage ROracle users to post questions and provide feedback on the Oracle R Forum. In addition to being a high performance database interface to Oracle Database from R for general use, ROracle supports database access for Oracle R Enterprise.

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
    I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow. Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and without code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good. Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The Designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different. Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values. But it doesn't do the same when you want to find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", then I could easily see enough of the type to determine what it is, but any names longer than those and all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really? WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development. Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.). ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity.
    The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args, despite ExternalDataEventArgs not being marked as an abstract class; there is no compiler error, warning, or any other indication that you're doing something wrong, until you run it, find that it always times out, and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it. I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.

    Read the article

  • Setup basic proxypass in Apache

    - by Eric
    I have a web application that communicates with a web service deployed on the same server. The web app was written with Tibco General Interface and works well only when it is running locally on the development system. When I deploy the web app to the Apache server, it fails with code 200, apparently due to cross-domain data. I use Firefox as a browser. I have tried changing Internet Explorer to allow access to cross-domain data and it works; however, IE is not an option. The web application runs on 192.168.2.205 (port 80). The web service runs on 192.168.2.205:8040. I have tried a number of things with ProxyPass inside Apache with no luck.

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    - by mehfuzh
    In this post, I show how you can benefit from the sequential mocking feature [in JustMock] for setting up expectations with successive calls of the same type. To start, let's first consider the following dummy database and entity class.

        public class Person
        {
            public virtual string Name { get; set; }
            public virtual int Age { get; set; }
        }

        public interface IDataBase
        {
            T Get<T>();
        }

    Now, our test goal is to return a different entity for successive calls on IDataBase.Get<T>(). By default, the behavior in JustMock is override, which is similar to other popular mocking tools. By override it means that the tool will always consider the latest user setup. Therefore, the first example will return the latest entity every time and will fail in line #12:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Mock.Arrange(() => database.Get<Person>()).Returns(person1);
        Mock.Arrange(() => database.Get<Person>()).Returns(person2);

        // this will fail
        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());

        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    We can solve it the following way, using a Queue that removes the item from the bottom on each call:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Queue<Person> queue = new Queue<Person>();

        queue.Enqueue(person1);
        queue.Enqueue(person2);

        Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue());

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    This will ensure that the right entity is returned, but it is not an elegant solution. So, in JustMock we introduced a new option that lets you set up your expectations sequentially, like:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Mock.Arrange(() => database.Get<Person>()).Returns(person1).InSequence();
        Mock.Arrange(() => database.Get<Person>()).Returns(person2).InSequence();

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    The "InSequence" modifier tells the mocking tool to return the expected results in the order specified by the user. The solution, though pretty simple, is neat (to me) and much simpler than using a collection to solve this type of case. Hope that helps. P.S. The example shown in my blog uses an interface, doesn't require a profiler, and you can even use notepad to build it referencing Telerik.JustMock.dll, run it with GUI tools, and it will work. But this feature also applies to concrete methods (which require the JustMock profiler) and can be used for more complex scenarios.

    Read the article

  • Resetting Your Oracle User Password with SQL Developer

    - by thatjeffsmith
    There's nothing more annoying than having to email, call, or log a support ticket to have one of your accounts reset. This is no less annoying in the Oracle database. Those pesky security folks have determined that your password should only be valid for X days, and your time is up. Time to reset the password! Except...you can't log into the database to reset your password. What now? Wait a second, look at this nifty thing I see in SQL Developer: Right click on my connection, reset password not available! Why not? The JDBC Driver Doesn't Support This Operation. We can't make this call over the Oracle JDBC layer, because it hasn't been implemented. However, our primary interface, OCI, does indeed support this. In order to use the Oracle Call Interface (OCI), you need to have an Oracle Client on your machine. The good news is that this is fairly easy to get going. The Instant Client will do. You have two options, the full or 'Lite' Instant Clients. If you want SQL*Plus and the other client tools, go for the full. If you just want the basic drivers, go for the Lite. Either of these is fine, but mind the bit level and version of Oracle! Make sure you get a 32 bit Instant Client if you run 32 bit SQL Developer, or a 64 bit one if you run 64 bit. Here's the download link. What, you didn't believe me? Mind the version of Oracle too! You want to be at the same level or higher than the database you're working with. You can use an 11.2.0.3 client with an 11.2.0.1 database, but not a 10gR2 client with an 11gR2 database. Clear as mud? Download and Extract. Put it where you want; Program Files is as good a place as any if you have the rights. When you're done, copy the directory path you extracted the archive to, because we're going to add it to your Windows PATH environment variable. The easiest way to find this in Windows 7 is to open the Start dialog and type 'path'. In Windows 8 you'll cast your spell and wave at your screen until something happens. I recommend you put it up front so we find our DLLs first. Now with that set, let's start up SQL Developer. Check the connection context menu again. Bingo! What happened there? SQL Developer looks to see if it can find the OCI resources. Guess where it looks? That's right, the PATH. If it finds what it's looking for, and confirms the bit level is right, it will activate the Reset Password option. We have a Preference to 'force' an OCI/THICK connection that gives you a few other edge case features, but you do not need to enable this to activate the Reset Password. Not necessary, but won't hurt anything either. There are a few actual benefits to using OCI-powered connections, but that's beyond the scope of today's blog post...to be continued. Ok, so we're ready to go. Now, where was I again? Oh yeah, my password has expired... Right click on your connection and now choose 'Reset Password'. You'll need to know your existing password and select a new one that meets your database's security standards. I Need Another Option, This Ain't Working! If you have another account in the database, you can use the DBA Panel to reset a user's password, or of course you can spark up a SQL*Plus session and issue the ALTER USER JEFF IDENTIFIED BY _________; command, but you knew this already, yes? I need more help 'installing' the Instant Client, help! There are lots and lots of resources out there on this subject.
    But I also know from personal experience that many of you have problems getting this to 'work.' The key things to remember are to download the right bit level AND to make sure the client install directory is in your path. I know many folks will just 'install' the Instant Client directly to one of their 'bin'-type directories. You can do that if you want, but I prefer the cleaner method. Of course, if you lack admin privs to change the PATH variable, that might be your only option. Or you could do what the original ORA- message indicated and 'contact your DBA.'

    Read the article

  • Integrating PHPBB With DokuWiki

    - by Christofian
    I have a site with a phpbb forum. I'm working on adding a dokuwiki wiki to the site, and I would like to integrate the two. The phpbb forum is running phpbb 3.0.8, and the dokuwiki installation is running 2011-05-25a "Rincewind". Basically what I want is: for users to be able to log into the dokuwiki wiki with their phpbb account. Users should only be able to register from the phpbb registration interface. Cookie integration would be nice, but is not required. Is there any way to do this?

    Read the article

  • How can I prevent [flush-8:16] and [jbd2/sdb2-8] from causing GUI unresponsiveness?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness? Details $ mount | grep ^/dev /dev/sda1 on / type ext4 (rw,noatime,discard,errors=remount-ro,commit=0) /dev/sdb2 on /home type ext4 (rw,commit=0) /dev/sda is an OCZ-VERTEX2 and /dev/sdb is a WD10EARS. Here is dumpe2fs /dev/sdb2, if it's relevant.

    Read the article

  • What is this code?

    - by Aerovistae
    This is from the Evolution of a Programmer "joke", at the "Master Programmer" level. It seems to be C++, but I don't know what all this bloated extra stuff is, nor did any Google searches turn up anything except the joke I took it from. Can anyone tell me more about what I'm reading here? [ uuid(2573F8F4-CFEE-101A-9A9F-00AA00342820) ] library LHello { // bring in the master library importlib("actimp.tlb"); importlib("actexp.tlb"); // bring in my interfaces #include "pshlo.idl" [ uuid(2573F8F5-CFEE-101A-9A9F-00AA00342820) ] cotype THello { interface IHello; interface IPersistFile; }; }; [ exe, uuid(2573F890-CFEE-101A-9A9F-00AA00342820) ] module CHelloLib { // some code related header files importheader(<windows.h>); importheader(<ole2.h>); importheader(<except.hxx>); importheader("pshlo.h"); importheader("shlo.hxx"); importheader("mycls.hxx"); // needed typelibs importlib("actimp.tlb"); importlib("actexp.tlb"); importlib("thlo.tlb"); [ uuid(2573F891-CFEE-101A-9A9F-00AA00342820), aggregatable ] coclass CHello { cotype THello; }; }; #include "ipfix.hxx" extern HANDLE hEvent; class CHello : public CHelloBase { public: IPFIX(CLSID_CHello); CHello(IUnknown *pUnk); ~CHello(); HRESULT __stdcall PrintSz(LPWSTR pwszString); private: static int cObjRef; }; #include <windows.h> #include <ole2.h> #include <stdio.h> #include <stdlib.h> #include "thlo.h" #include "pshlo.h" #include "shlo.hxx" #include "mycls.hxx" int CHello:cObjRef = 0; CHello::CHello(IUnknown *pUnk) : CHelloBase(pUnk) { cObjRef++; return; } HRESULT __stdcall CHello::PrintSz(LPWSTR pwszString) { printf("%ws\n", pwszString); return(ResultFromScode(S_OK)); } CHello::~CHello(void) { // when the object count goes to zero, stop the server cObjRef--; if( cObjRef == 0 ) PulseEvent(hEvent); return; } #include <windows.h> #include <ole2.h> #include "pshlo.h" #include "shlo.hxx" #include "mycls.hxx" HANDLE hEvent; int _cdecl main( int argc, char * argv[] ) { ULONG ulRef; DWORD dwRegistration; CHelloCF *pCF = new CHelloCF(); hEvent = CreateEvent(NULL, FALSE, FALSE, NULL); // Initialize the OLE libraries CoInitiali, NULL); // Initialize the OLE libraries CoInitializeEx(NULL, COINIT_MULTITHREADED); CoRegisterClassObject(CLSID_CHello, pCF, CLSCTX_LOCAL_SERVER, REGCLS_MULTIPLEUSE, &dwRegistration); // wait on an event to stop WaitForSingleObject(hEvent, INFINITE); // revoke and release the class object CoRevokeClassObject(dwRegistration); ulRef = pCF->Release(); // Tell OLE we are going away. 
CoUninitialize(); return(0); } extern CLSID CLSID_CHello; extern UUID LIBID_CHelloLib; CLSID CLSID_CHello = { /* 2573F891-CFEE-101A-9A9F-00AA00342820 */ 0x2573F891, 0xCFEE, 0x101A, { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 } }; UUID LIBID_CHelloLib = { /* 2573F890-CFEE-101A-9A9F-00AA00342820 */ 0x2573F890, 0xCFEE, 0x101A, { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 } }; #include <windows.h> #include <ole2.h> #include <stdlib.h> #include <string.h> #include <stdio.h> #include "pshlo.h" #include "shlo.hxx" #include "clsid.h" int _cdecl main( int argc, char * argv[] ) { HRESULT hRslt; IHello *pHello; ULONG ulCnt; IMoniker * pmk; WCHAR wcsT[_MAX_PATH]; WCHAR wcsPath[2 * _MAX_PATH]; // get object path wcsPath[0] = '\0'; wcsT[0] = '\0'; if( argc > 1) { mbstowcs(wcsPath, argv[1], strlen(argv[1]) + 1); wcsupr(wcsPath); } else { fprintf(stderr, "Object path must be specified\n"); return(1); } // get print string if(argc > 2) mbstowcs(wcsT, argv[2], strlen(argv[2]) + 1); else wcscpy(wcsT, L"Hello World"); printf("Linking to object %ws\n", wcsPath); printf("Text String %ws\n", wcsT); // Initialize the OLE libraries hRslt = CoInitializeEx(NULL, COINIT_MULTITHREADED); if(SUCCEEDED(hRslt)) { hRslt = CreateFileMoniker(wcsPath, &pmk); if(SUCCEEDED(hRslt)) hRslt = BindMoniker(pmk, 0, IID_IHello, (void **)&pHello); if(SUCCEEDED(hRslt)) { // print a string out pHello->PrintSz(wcsT); Sleep(2000); ulCnt = pHello->Release(); } else printf("Failure to connect, status: %lx", hRslt); // Tell OLE we are going away. CoUninitialize(); } return(0); }

    Read the article

  • PySide 1.0.0 beta 2: full support for declarative interfaces arrives in this LGPL Python binding for Qt

    Here, then, is the second beta of PySide, the Python binding for Qt initiated by Nokia, whose main difference from the historical binding, PyQt, lies in the license: PySide is available under the LGPL, a less restrictive license than the GPL used by PyQt. A Python binding for Qt can thus be used for proprietary development without any obligation to pay for a commercial license. The first beta of PySide (the aptly named beta 1) brought a major change compared to the previous releases (0.4.2 and earlier): a change in the ABI (Application Binary Interface), which, to stay out of the technical details, forced every application based on PySide (notably the Python module) to be recompiled. However, in this way, the project ...

    Read the article

  • Microsoft Unity 2.0 Released

    - by shiju
    Microsoft's dependency injection framework, Unity Application Block (Unity), has reached version 2.0. The release is available on CodePlex at http://unity.codeplex.com/ Related links: Unity 2.0 Change Log, Release Notes - Unity 2 for Desktop, Release Notes - Unity 2 for Silverlight, Official Download - Unity 2.0, Official Download - Unity 2.0 for Silverlight, Additional Materials Download, Migration Guide, Videos. UnityContainer Fluent Interface: Unity 2.0 provides a fluent interface for type configuration. Now you can call all the methods in a single statement, as shown in the following code.

        IUnityContainer container = new UnityContainer()
            .RegisterType<IFormsAuthentication, FormsAuthenticationService>()
            .RegisterType<IMembershipService, AccountMembershipService>()
            .RegisterInstance<MembershipProvider>(Membership.Provider)
            .RegisterType<IDinnerRepository, DinnerRepository>();

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    In this post, I show how you can benefit from the sequential mocking feature [in JustMock] for setting up expectations with successive calls of the same type. To start, let's first consider the following dummy database and entity class. public class Person { public virtual string Name { get; set; } public virtual int Age { get; set; } } public interface IDataBase { T Get<T>(); } Now, our test goal is to return a different entity for successive calls on IDataBase.Get<T>()...

    Read the article

  • Set up iis7.5 to deny connections outside of LAN for certain folder [migrated]

    - by Darkcat Studios
    I'm setting up a combined website and extranet currently; they both read from the same database on the same server that the site is hosted on, the reason being that the website is fed from the data that the staff plug into the extranet interface. It also links in to AD for authorising access to the extranet. I have the extranet in a folder within the website folder. What I want to do is only allow the extranet to be accessed from computers within our LAN, but allow the main website to be freely accessible to internet users. I have it set up as a generic web server currently, so anyone can view anything (well, up to the point where the user is asked to log into the extranet, of course!). I have read a lot on this, but nothing I read applies to, or works in, IIS 7.5.

    Read the article

  • Where do you search/look for game developers for an indie game startup?

    - by G.Campos
    Hey there, I just recently saw that Stack Overflow had a game dev sister site, so here I am, wondering if you experienced fellows know where one can search/look for game developers for an indie game startup. In other words: I have a game idea which I've written down with as much detail as possible (so anyone else can understand how it works) and now I'm looking for a heavy PHP programmer with whom to pair up in order to go from idea to reality. I'm a front-end/interface designer and an intermediate programmer. I recognize my project requires heavy programming skills which I do not have as of today =) So, what websites, communities or places do you recommend I go look into? Where do good programmers interested in indie games go look for projects if they don't have their own? Thanks in advance, G.Campos

    Read the article

  • links for 2010-05-26

    - by Bob Rhubart
    @vambenempe - Dear Cloud API, your fault line is showing "I am talking about the dreadful state of fault reporting in remote APIs, from Twitter to Cloud interfaces. They are badly described in the interface documentation and the implementations often don’t even conform to what little is documented." -- William Vambenempe (tags: oracle otn cloud) @oraclebase: Consuming Web Services using PL/SQL Oracle ACE Director Tim Hall shares a couple of solutions for consuming web services using PL/SQL. (tags: oracle otn oracleace soa sql webservices) Douwe Pieter van den Bos: IT Project misstep: To Serve and Protect "Thoughts and vision change during time. We gain new insights and other people share their knowledge. This is exactly why software development projects need to be based on a change facilitating manner, not trying to avoid change, or make it more difficult." -- Douwe Pieter van den Bos (tags: oracle otn architect projectmanagement innovation)

    Read the article

  • Amazon SOA: database as a Service

    - by Martin Lee
    There is an interesting interview with Werner Vogels which is partly about how Amazon does Service Oriented Architecture: For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there's no data sharing among the services. I do not understand that. Why do they need to 'wrap' a database in some layer if it can already be consumed as a service by other services through database adapters? Does Amazon do that just because they need to expose the database to third parties, or for some other reason? Why is "no direct database access allowed"? What are the advantages of such an architectural decision?
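    To make the quoted principle concrete, here is a toy sketch (plain Python, with invented class and method names, nothing taken from Amazon's actual systems) of what it looks like when a service owns its data store and publishes only an operational interface, instead of sharing the database with its consumers.

        # Toy illustration of "encapsulate the data with the business logic and
        # expose only a published service interface". All names are made up.

        class OrderStore(object):
            """Private storage. In the encapsulated design, nothing outside
            OrderService touches this directly."""
            def __init__(self):
                self._rows = {}

            def put(self, order_id, row):
                self._rows[order_id] = row

            def get(self, order_id):
                return self._rows[order_id]


        class OrderService(object):
            """The published interface: callers see operations, never tables."""
            def __init__(self):
                self._store = OrderStore()   # owned by the service, not shared

            def place_order(self, order_id, items):
                # Business logic lives next to the data it operates on.
                if not items:
                    raise ValueError("an order needs at least one item")
                self._store.put(order_id, {"items": list(items), "status": "placed"})

            def order_status(self, order_id):
                return self._store.get(order_id)["status"]


        # Callers only ever use the service operations:
        service = OrderService()
        service.place_order("o-1", ["book"])
        print(service.order_status("o-1"))   # -> placed

    One commonly cited advantage of this arrangement is that the team owning the service can change the schema, storage engine, or scaling strategy behind place_order and order_status without coordinating with every consumer, whereas direct database access, even through adapters, couples every consumer to the current schema.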

    Read the article

  • Cannot ping static ip on eth1

    - by Calvin Froedge
    I am trying to ping the network interface I have set up for eth1. This is my config: auto eth1 iface eth1 inet static address 192.168.1.2 netmask 255.255.255.0 gateway 192.168.1.1 broadcast 192.168.1.255 If I ping 192.168.1.2, I get: Ping 192.168.1.2 (192.168.1.2) 56(84) bytes of data. From 192.168.1.3 icmp_seq=3 Destination Host Unreachable Results of ifconfig tell me that the IPv4 address is 192.168.1.3. I can ping this ip. Bcast and Mask are as expected (same as in definition). I can ping 192.168.1.3 from my macbook. I cannot ping 192.168.1.2 locally or from my macbook. Any ideas why?

    Read the article

  • Ask the Readers: Favorite Deal App?

    - by Jason Fitzpatrick
    Black Friday is mere days away and the holiday shopping season is upon us. This week we want to hear about your favorite deals apps: how do you find the best deals on the go? The parameters for this week's Ask the Readers question are pretty straightforward: we want to hear about your favorite deal-finding/deal-comparison smartphone app. Whether it's for Android, iOS, or a web site with a very cleanly formatted mobile interface, we want to hear about how you score serious deals out in the field. How do you know if the deal you see in the store is the best deal? Sound off in the comments with your favorite deal app and then check in on Friday for the What You Said roundup.

    Read the article
