Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Quickly and Automatically Restart a Windows Program When it Crashes

    - by Lori Kaufman
    We’ve all had programs crash on us in Windows at one time or another. You can take the time to manually start the program again, or you can have a simple program like ReStartMe restart it automatically for you. ReStartMe is a free program that has one purpose in life: to restart processes. You tell it to watch specific processes, and if any of those processes exit, whether they crashed or you accidentally closed them, ReStartMe will automatically restart them. To install the program, double-click on the restartmeinstaller.exe file you downloaded (see the link at the end of the article). Follow the easy installation process, accepting the default settings.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

        - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
        - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
        - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right, and it's really easy to get wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break, which can be a long time after the change that triggered the breakage, making it hard to connect cause and effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

        - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
        - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
        - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

        - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
        - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
        - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
        - It ought to be very fast to build stub objects, as there are no input objects to process.
        - Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
        - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
        - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

        - Present the same set of global symbols, with the same ELF versioning, as the real object.
        - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
        - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
        - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
        - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

        - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
        - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
        - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

        - A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
        - Another Perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were several issues:

        - The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
        - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
        - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
        - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is massive, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those designs and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

        - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
        - A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
        - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
        - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

        - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
        - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
        - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
        - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
        - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

        - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
        - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
        - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
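    To make the makefile additions described above concrete, here is a minimal sketch of how a library Makefile might wire up the stub and stubinstall rules. This is an illustration with hypothetical file and macro names, not the actual ON makefiles:

        # Hypothetical library Makefile fragment. ROOT and STUBROOT
        # point at the real proto and the stub proto respectively.
        LIB =       libfoo.so.1
        MAPFILE =   mapfile-vers

        # Real object: links against stubs under $(STUBROOT), so no
        # ordering relative to other libraries is required.
        $(LIB): $(OBJS) $(MAPFILE)
                ld -G -h$(LIB) -M$(MAPFILE) -L$(STUBROOT)/lib \
                    -ztext -zdefs -o $@ $(OBJS)

        # Stub object: the same command line plus -z stub. Only the
        # mapfile is read, so this rule has no build-order dependencies.
        stub: $(MAPFILE)
                ld -G -h$(LIB) -M$(MAPFILE) -z stub -o stubs/$(LIB)

        stubinstall: stub
                cp stubs/$(LIB) $(STUBROOT)/lib/$(LIB)

        install: $(LIB)
                cp $(LIB) $(ROOT)/lib/$(LIB)

    The point of the sketch is the shape of the rules: the stub target consumes only the mapfile, so stubinstall for every library can run before any real object exists.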

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setup, with each country site running under its own domain name:

        www.companyname.de
        www.companyname.ch
        www.companyname.at

    I would love to merge them all in this way:

        www.companyname.com/de
        www.companyname.com/ch
        www.companyname.com/at

    with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO - the business is very SEO dependent, and being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
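    For illustration, one widely used mechanism beyond the lang attribute (a sketch, not advice from the original thread) is to annotate each country variant with rel="alternate" hreflang links, so search engines can map sub-directories to language/region pairs; geotargeting per sub-directory can also be set in the search engines' webmaster tools. The URLs below are the hypothetical ones from the question:

        <!-- In the <head> of each page, listing every variant,
             including the page's own. -->
        <link rel="alternate" hreflang="de-DE" href="http://www.companyname.com/de/" />
        <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />
        <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />
        <link rel="alternate" hreflang="nl-NL" href="http://www.companyname.com/nl/" />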

    Read the article

  • Terminal Errors/Package Errors

    - by Bryan
    After running some updates a week ago or so, I've noticed that any installation done via the Ubuntu Software Center or elsewhere in the GUI gives an error message stating that there was a package error. Also, any installation done through the terminal ends with the message E: Sub-process /usr/bin/dpkg returned an error code (1). Oddly, the software is still installed and seems to function properly. Is there any way to get rid of this? I've tried running apt-get autoclean as recommended on another site, but this doesn't seem to work. I'm fairly new to Linux/Ubuntu, just FYI.
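    For reference, a few commonly suggested recovery commands for this class of dpkg error (a hedged sketch of the usual first steps, not a guaranteed fix; the lines printed just above the E: message usually name the package whose maintainer script is failing):

        # Finish configuring any packages left half-installed
        sudo dpkg --configure -a

        # Ask apt to repair broken or missing dependencies
        sudo apt-get install -f

        # Re-sync the package lists, then retry the failed install
        sudo apt-get update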

    Read the article

  • New AutoVue Movies Available at the Oracle AutoVue Channel!

    - by Gerald Fauteux
    There are 4 new movies available at the Oracle AutoVue Channel. Three of these latest AutoVue movies demonstrate how AutoVue can be used in various processes in the Electronics & High Tech sector. The fourth shows how AutoVue can be used on an iPad using Oracle Virtual Desktop Infrastructure (OVDI). They are:

        Improving the Design Process with AutoVue in the Electronics & High Tech Industry - Watch it now (7:17)
        Improving Manufacturing and Assembly with AutoVue in the Electronics & High Tech Industry - Watch it now (7:55)
        Improving Supply Chain Management with AutoVue in the Electronics & High Tech Industry - Watch it now (4:42)
        Mobile Asset Management on the iPad With AutoVue and Oracle Virtual Desktop Infrastructure (OVDI) - Watch it now (3:52)

    See all the movies available at the Oracle AutoVue Channel!

    Read the article

  • Google I/O 2012 - Native Client LIVE

    Speakers: Colton McAnlis, Noel Allen. In this talk, we will be porting an application to Native Client in 60 minutes, LIVE, showing the power of what Native Client can provide for traditional C++ developers looking to move to the web. In the porting process we'll cover specific tasks that a developer would need to perform during a port, and how to address them with new tools and technologies, including debugging integration with Visual Studio and a set of newly added utility libraries in the SDK. Attendees of this session will walk away with a clear understanding of what's required to port their applications to Native Client so that they can start their own projects. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • Do you develop with security in mind?

    - by MattyD
    I was listening to a podcast on Security Now and they mentioned how a lot of the security problems found in Flash exist because, when Flash was first developed, it wasn't built with security in mind - it didn't need to be - and thus Flash has major security flaws in its design. I know best practices state that you should build secure first. Some people or companies don't always follow 'best practice'... My question is: do you develop to be secure, or do you build all the desired functionality and then alter the code to be secure (whatever the project may be)? (I realise that this question could be a possible duplicate of Do you actively think about security when coding?, but it differs in that it asks about the actual process of building the software/application and the design of said software/application.)

    Read the article

  • Video: Telerik Silverlight Chart showing live data from SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). In this video, I demonstrate:

        - The process of writing, authoring, deploying, configuring, and debugging a custom WCF service in SharePoint 2010.
        - Integrating it with the Telerik Silverlight RAD Chart, showing live data from the server: CPU usage of your web front end. It can be enhanced to show whatever else you want.
        - Doing all this in Visual Studio 2010: how you'd put your project together, how you go about diagnosing it, debugging it - the whole bit.

    The whole presentation is about 45 mins, and it's mostly all code, so plenty of juicy stuff here! At the end of this, you have a pretty sexy app running .. just fast forward to the end of the video below, and you'll see what I'm talking about. :) You can watch the video here

    Read the article

  • Professional iOS Development as a Backup Career [closed]

    - by New Coder
    I am a research chemist by day and a self-taught hobbyist iOS programmer by night. I am in the process of developing a moderately complex iOS app and hope to launch it within a month or two. I love everything about iOS development (and programming in general). I want to know if iOS development could become a backup career for me if I lose my job. My question: Let's say I had a couple of apps in the App Store, a solid foundation in Objective-C and the Apple frameworks, and basic knowledge of network-integrated apps. Without a formal CS degree, what other experience/knowledge would I need to land a job as a professional iOS developer? Forgive me if this question is out of bounds for this forum. If it is, suggestions on where to post such a question would be appreciated.

    Read the article

  • Reverse Search Images Easily with the TinEye Client for Windows

    - by Asian Angel
    Are you a frequent user of TinEye and would like to integrate it into your favorite Windows system? Then get ready to enjoy Context Menu and App Window goodness with the TinEye Client for Windows. After you have downloaded the zip file, unzip it and run the setup file inside. Once the installation process has finished you will be asked if you would like to launch TinEye Client immediately or not. If not, you can access it later using the new shortcut added to the Start Menu. We chose to let the program launch automatically…this is what the main window looks like. For our test we decided to access the client via the Context Menu using a picture of Doc Brown’s DeLorean in hover conversion mode.

    Read the article

  • DotNetNuke 5.3.1 Version now entering QA.

    With any software product you will occasionally have a release that you wish you could take back. Microsoft had Windows Vista, and Tuesday we had DotNetNuke 5.3.0. We had a couple of significant bugs which slipped through the QA process and which resulted in a major impact to customers. Rather than continue to compound the problem, we have made the decision to pull the 5.3.0 packages from CodePlex and from the DotNetNuke Support Network while we test a 5.3.1 release, which we expect to release early next...

    Read the article

  • Tab Sweep - Jazoon aftermath, PaaS press, REST + WebSocket, ...

    - by alexismp
    Recent Tips and News on Java EE 6 & GlassFish:

        • The GlassFish Tale - Oracle Scene (Markus)
        • Notes from Jazoon 2011 (Marek)
        • Jazoon '11 presentations (Jazoon.com)
        • Enterprise Java upgrade geared to PaaS clouds (TechCentral.ie)
        • JavaOne 2011: Content review process and Tips for submissions (Arun)
        • REST + WebSocket applications? Why not using the Atmosphere Framework (Jeanfrancois)
        • Get your Java 7 screensaver! (Duke)

    Read the article

  • 3D transformations in WPF & DirectX/Direct3D or OpenGL

    - by user2723417
    I need your help with 3D transformations. I have a sphere and I want to deform it by a mouse click or a mouse move. I want to make a furrow or bite off a piece of the sphere without any breaks in the 3D material. It is possible in WPF, but if the number of 3D points is more than 25,000, it causes freezes in dynamic mode (animation breaks), because the MeshGeometry3D object has to be reconstructed every time to prevent breaks in the 3D material. Give me advice about tools for the realization of my task. Maybe it can be done with the help of DirectX/Direct3D or OpenGL? I am a newcomer to these collections of APIs, but I would like to study them. I need to integrate the transformation process into a WPF application.

    Read the article

  • what are the benefits of closure, primarily for PHP?

    - by Patrick
    I am beginning the process of moving code over to PHP 5.3, and one of the most highly touted features of PHP 5.3 is the ability to use closures. My understanding of closures is that they allow anonymous functions, can be assigned to variable names, and have interesting scoping abilities. From my point of view, the only apparent benefit in real-world applications is the reduction of clutter in the namespace, because closures are anonymous. Am I wrong in this? Should I be trying to use closures wherever I code? EDIT: I have already read this post on JavaScript closures.
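    For illustration, a minimal PHP 5.3 sketch of the two things closures add over named functions - function values assignable to variables, and capture of enclosing-scope variables via the use clause (names here are hypothetical):

        <?php
        $tax = 0.21;

        // Anonymous function assigned to a variable; the 'use'
        // clause captures $tax from the enclosing scope - without
        // it, the closure cannot see $tax.
        $withTax = function ($price) use ($tax) {
            return $price * (1 + $tax);
        };

        echo $withTax(100.0);   // prints 121

        // Handy inline for callbacks, with no named helper
        // polluting the namespace:
        $prices = array_map($withTax, array(10.0, 20.0));
        ?>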

    Read the article

  • How do I install an older 2.6.37 Kernel Version?

    - by Seyed Mohammad
    I have a Sony VAIO P netbook and, for several issues (graphics driver, audio driver and power management), I want to install an older version of the Linux kernel on Ubuntu 11.10 (actually it's Xubuntu) that seems to be much more suitable. So I searched for Ubuntu kernels and found this link, which includes all versions of the Linux kernel distributed by Ubuntu. I am looking for a version before 2.6.38 (to escape the known power management issue) and of course to solve my many driver problems! I guess my best bet is 2.6.37, but there are several 2.6.37.x-x kernels! Can someone point me to the right choice? In each folder (for example: this one) there are several DEB packages. Which packages should I install? (Note: I have a 32-bit system.) What is the installation process? sudo dpkg -i *.deb? Is this fine, or are additional steps required? Thanks.
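    For reference, a hedged sketch of the usual procedure with these kernel package folders (file names below are illustrative placeholders; the convention is to take the all-architectures headers package plus the generic headers and image packages matching your architecture, i386 for a 32-bit system):

        # From the chosen folder, download the matching set:
        #   linux-headers-<version>_all.deb
        #   linux-headers-<version>-generic_..._i386.deb
        #   linux-image-<version>-generic_..._i386.deb

        # Install them together so dependencies resolve:
        sudo dpkg -i linux-headers-*.deb linux-image-*.deb

        # Refresh the boot menu (usually run automatically by the
        # package scripts; harmless to run again), then reboot and
        # pick the older kernel from the GRUB menu:
        sudo update-grub
        sudo reboot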

    Read the article

  • Display problem with fresh install of 12.04

    - by Dan
    Just installed Ubuntu 12.04 from CD and the install went with no problems. After rebooting, I get the initial purple screen and then a black screen with a mouse pointer and a few stray pixels at the bottom left of the screen. Occasionally during the boot process, the purple screen comes back momentarily, but then it's back to the black screen with the mouse pointer. When I finally give up and press the power button, the purple screen returns with the Shut Down box visible as it is shutting down. Any ideas? I have tried adding nomodeset after quiet splash, but no change. Possibly not doing it correctly, since I am somewhat of a newbie to linux. Thanks! Dan

    Read the article

  • How To Enlarge a Virtual Machine’s Disk in VirtualBox or VMware

    - by Chris Hoffman
    When you create a virtual hard disk in VirtualBox or VMware, you specify a maximum disk size. If you want more space on your virtual machine’s hard disk later, you’ll have to enlarge the virtual hard disk and its partition. Note that you may want to back up your virtual hard disk file before performing these operations – there’s always a chance something can go wrong, so it’s always good to have backups. However, the process worked fine for us. Image Credit: flickrsven
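    On the VirtualBox side, the resize itself is a single command (a sketch; the VDI path and size below are placeholders, and the guest's partition still has to be grown into the new space afterwards, for example with GParted):

        # Grow a VDI image to 80 GB; --resize takes the new
        # total size in megabytes.
        VBoxManage modifyhd "~/VirtualBox VMs/win7/win7.vdi" --resize 81920

        # The extra space then appears as unallocated inside the
        # VM; extend the guest partition into it with a partition
        # editor run inside the guest or from a live CD.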

    Read the article

  • Provisioning Videos

    - by Owen Allen
    There are a couple of new videos up on the Oracle Learning YouTube channel about Ops Center's provisioning capabilities. Simon Hayler does a walkthrough of a couple of different procedures. The first video shows you how to provision Oracle Solaris zones. It explains how to create an Oracle Solaris Zone profile, and then how to apply it (using a deployment plan) to a target system. The second video shows you how to provision an x86 server with Oracle Solaris. This uses a very similar process - you create an OS provisioning profile, then use a deployment plan to apply it to the target hardware. The documentation goes over OS provisioning and zone creation in the Feature Guide, if you're looking for additional information.

    Read the article

  • Article about Sun ZFS Storage Appliances

    - by Owen Allen
    Sun ZFS Storage Appliances are versatile storage systems. Discovering and managing them in Ops Center, though, makes them even more versatile. If you discover a Sun ZFS Storage Appliance in Ops Center 12c, you can create iSCSI and Fibre Channel LUNs, and make the LUNs available to server pools and virtualization hosts as a storage library. Barbara Higgins has written an excellent article that walks you through the process of setting up a Sun ZFS Storage Appliance and discovering and managing it in Ops Center. If you're looking into ways to make a Sun ZFS Storage Appliance work for you, it's worth a look.

    Read the article

  • Oracle E-Business Suite 12 Certified on Oracle Linux 6 (x86-64)

    - by John Abraham
    Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on 64-bit Oracle Linux 6 with the Unbreakable Enterprise Kernel (UEK). New installations of the E-Business Suite R12 on this OS require version 12.1.1 or higher. Cloning of existing 12.1 Linux environments to this new OS is also certified using the standard Rapid Clone process. There are specific requirements to upgrade technology components such as the Oracle Database (to 11.2.0.3) and Fusion Middleware as necessary for use on Oracle Linux 6. These and other requirements are noted in the Installation and Upgrade Notes (IUN) below.

    Certification for other Linux distros still underway

    Certifications of Release 12 with 32-bit Oracle Linux 6, 32-bit and 64-bit Red Hat Enterprise Linux (RHEL) 6, and the Red Hat default kernel are in progress.

    References

        Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 (My Oracle Support Document 761566.1)
        Cloning Oracle Applications Release 12 with Rapid Clone (My Oracle Support Document 406982.1)
        Interoperability Notes Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (My Oracle Support Document 1058763.1)
        Oracle Linux website

    Read the article

  • How do I pause and resume apt-fast package download?

    - by jasoncruz98
    I know that in order to speed up apt-get downloads, I can use apt-fast (which uses the aria2c or axel engine, depending on which one I install during configuration). But even though it is said to be able to pause and resume downloads, I don't know how to do it, and I can't find any answer online that tells me how. I have no intention of pausing the apt-fast update function; I just want the ability to pause sudo apt-fast install package_name and resume the download of a package in Ubuntu at will using apt-fast (with axel or aria2c). I have seen in some forums that sudo apt-fast update cannot be paused because pausing it means starting the entire process over. Please correct me if I'm wrong. Any help would be much appreciated.
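    As background, both underlying engines can resume a partially downloaded file when invoked directly; whether apt-fast itself re-invokes them this way depends on its configuration. A hedged illustration of the engines' own resume support (the URL is a placeholder), not of apt-fast internals:

        # aria2c: -c / --continue picks up a partial download
        aria2c -c http://archive.ubuntu.com/ubuntu/pool/main/.../package.deb

        # axel resumes automatically if the partial file and its
        # .st state file are still present
        axel http://archive.ubuntu.com/ubuntu/pool/main/.../package.deb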

    Read the article

  • Ubuntu Desktop Password

    - by doug
    I inherited a machine with Ubuntu desktop installed. It has a password in place and I have no idea what the password may be. I cannot get to the command line to use the methods I have found online. No matter how many times I press "Shift" during the boot process, it still goes all the way to the desktop login. I never see grub. I am not sure which version I have, but I think it may be 9 or 10. Thanks, Doug

    Read the article

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-on-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the levels/skills/specials balance. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd be like this:

        - Generate a big group of random characters.
        - Make them fight, leveling them up according to their victories (more XP) / losses (less XP).
        - Mate the winners, crossing their builds, to try and make even better characters.
        - Add some more random chars, emulating new players.
        - Repeat the process for some time, or until I find some chars who can beat everyone's butt.

    I could then play with the math and try to find better balances to make sure that the top x% of chars would be a mix of various build types. So, is it a good idea, or is there some other, easier method to do the balancing?
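    A minimal Python sketch of the loop described above (the stats, point budget, and toy combat model are placeholder assumptions, not a balanced design):

        import random

        STATS = ["attack", "defense", "speed", "special"]
        POINTS = 20          # skill points per build (assumed)
        POP_SIZE = 100
        GENERATIONS = 50

        def random_build():
            # Randomly distribute POINTS across the stats.
            build = {s: 0 for s in STATS}
            for _ in range(POINTS):
                build[random.choice(STATS)] += 1
            return build

        def fight(a, b):
            # Toy combat model: score each build with some noise;
            # a real game would simulate actual duels here.
            def score(c):
                return (c["attack"] * 1.0 + c["defense"] * 0.8 +
                        c["speed"] * 0.9 + c["special"] * 1.1 +
                        random.gauss(0, 2))
            return a if score(a) >= score(b) else b

        def fitness(build, population, rounds=20):
            # "XP" = number of wins against random opponents.
            opponents = random.sample(population, rounds)
            return sum(1 for o in opponents if fight(build, o) is build)

        def crossover(p1, p2):
            # Child takes each stat from a random parent, then is
            # repaired so it still spends exactly POINTS points.
            child = {s: random.choice((p1[s], p2[s])) for s in STATS}
            while sum(child.values()) > POINTS:
                s = random.choice([s for s in STATS if child[s] > 0])
                child[s] -= 1
            while sum(child.values()) < POINTS:
                child[random.choice(STATS)] += 1
            return child

        population = [random_build() for _ in range(POP_SIZE)]
        for gen in range(GENERATIONS):
            ranked = sorted(population,
                            key=lambda b: fitness(b, population),
                            reverse=True)
            winners = ranked[:POP_SIZE // 4]           # mate the winners
            children = [crossover(random.choice(winners),
                                  random.choice(winners))
                        for _ in range(POP_SIZE // 2)]
            newcomers = [random_build()                # "new players"
                         for _ in range(POP_SIZE // 4)]
            population = winners + children + newcomers

        # Top build of the final generation's selection:
        print(sorted(population[0].items()))

    The newcomers slot mirrors the "new players" step and keeps the population from converging on a single degenerate build; if the top quartile ends up all-attack, that is exactly the imbalance signal to tune against.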

    Read the article

  • How To Peel Garlic In Quick & Easy Way

    - by Gopinath
    Garlic is a very common ingredient used in cooking in many parts of the world. In India it’s an undeniable ingredient in almost all the food items that are made using masala. So every cook in an Indian kitchen knows the pain of peeling garlic. It’s a messy and time-consuming process to peel off all the dead skin layers to get the tasty cloves. Cooking web site Saveur shows us an easy way to peel an entire garlic head in less than 10 seconds using just two bowls. No knives, no scissors or any other instruments. Check the embedded video. I’ve not yet tried this trick at home, but it looks very easy. What do you say? via Lifehacker (thanks vijay). cc image credit: flickr/lightlady

    Read the article

  • How to fix Jdeveloper 11.1.1.2 Hang

    - by nestor.reyes
    Is JDeveloper hanging on you when you use the XPath Expression Builder? Have a look at the release notes for 11.1.1.2; this will relieve a lot of frustration.

    http://download.oracle.com/docs/cd/E15523_01/doc.1111/e14770/bpel.htm#BABECHBF

    16.1.6 Oracle JDeveloper May Hang When Using the Expression Builder

    Using the Expression Builder to build XPath expressions may cause Oracle JDeveloper to hang. If that happens, perform the following steps:

        1. Kill the Oracle JDeveloper process.
        2. Restart Oracle JDeveloper.
        3. Select Tools > Preferences > SOA, and deselect the Validate Expression checkbox.

    After performing these steps, Oracle JDeveloper should no longer hang.

    Read the article
