Search Results

Search found 13068 results on 523 pages for 'copy and paste'.

Page 221/523 | < Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years.

    We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
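    To make the shape of that prototype concrete, here is a rough sketch of the pipeline just described. The script names (mkstubglue.pl, chkstub.pl) and file names are hypothetical stand-ins for the internal tools, not their actual names:

        # generate empty functions and sized data definitions from the mapfile
        perl mkstubglue.pl libc.map > libc_stub_glue.c

        # compile and link the stub shared object, then discard the glue
        cc -m64 -c libc_stub_glue.c -o libc_stub_glue.o
        ld -64 -G -h libc.so.1 -M libc.map -o stubs/libc.so.1 libc_stub_glue.o
        rm libc_stub_glue.c libc_stub_glue.o

        # after the real object is built, verify both expose the same interface
        # by comparing elfdump output of the real and stub objects
        perl chkstub.pl /lib/64/libc.so.1 stubs/libc.so.1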
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option (a short sketch of the resulting command lines appears at the end of this article). This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
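    For readers who mainly want the mechanics, here is a minimal sketch of the build flow described above. The library name and proto paths are illustrative; only the -z stub option and the ROOT/STUBROOT makefile macros come from the article itself:

        # stubinstall: build the stub with the same ld command line as the real
        # object, plus -z stub, and deliver it into the stub proto
        ld -64 -z stub -G -hlibfoo.so.1 -M mapfile-vers \
            -o $STUBROOT/lib/64/libfoo.so.1

        # consumers resolve their dependencies against the stub proto, so they
        # can all be built in parallel with no ordering rules between libraries
        cc -m64 -o myprog myprog.o -L$STUBROOT/lib/64 -R/lib/64 -lfoo

        # the real libfoo.so.1 is built the same way without -z stub and is
        # installed into $ROOT; only the files under $ROOT ship in the product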

    Read the article

  • Kuppinger Cole Paper on Entitlements Server

    - by Naresh Persaud
    Kuppinger Cole recently released a paper discussing external authorization, describing how organizations can "future proof" their enterprise security by deploying Oracle Entitlements Server. By taking a declarative approach to security, organizations can keep policy flexible and apply it consistently across multiple applications. You can get a copy of the report here. In fact, Oracle Entitlements Server is being used in many places to secure data and sensitive business transactions. The paper covers the major use cases for Entitlements Server as well as Kuppinger Cole's assessment of the market. Here are some additional resources that reinforce the cases discussed in the paper. Today, cloud and mobile applications can utilize RESTful interfaces; click on this link to learn how. OES can also be used to secure data in Oracle Databases. To learn more, check out the new Oracle U OES 11g course.

    Read the article

  • Winetricks fails to find program files directory

    - by EgyptLovesUbuntu
    I installed a fresh copy of Ubuntu 12 desktop, then installed WINE from the Ubuntu Software Center and installed Winetricks from the Ubuntu Software Center. When I type the following command in the terminal:

        sudo winetricks dotnet40

    I get this error message:

        wine cmd.exe /c echo '%ProgramFiles%' returned empty string

    If I try the command without sudo:

        winetricks dotnet40

    the output is as follows:

        Executing w_do_call dotnet40
        Executing load_dotnet40
        ------------------------------------------------------
        dotnet40 does not yet fully work or install on wine. Caveat emptor.
        ------------------------------------------------------
        Executing mkdir -p /home/vectoruser/.cache/winetricks/dotnet40
        mkdir: cannot create directory `/home/vectoruser/.cache/winetricks/dotnet40': Permission denied
        ------------------------------------------------------
        Note: command 'mkdir -p /home/vectoruser/.cache/winetricks/dotnet40' returned status 1. Aborting.
        ------------------------------------------------------

    My current user is vectoruser, which I use to log on to Ubuntu. The output of ls -ld /home/vectoruser /home/vectoruser/.cache /home/vectoruser/.cache/winetricks gives:

        drwxr-xr-x 32 vectoruser vectoruser 4096 Aug 2 19:26 /home/vectoruser
        drwx------ 19 vectoruser vectoruser 4096 Aug 2 19:25 /home/vectoruser/.cache
        drwxr-xr-x 2 root root 4096 Aug 2 18:09 /home/vectoruser/.cache/winetricks
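    The ls output shows that ~/.cache/winetricks is owned by root, which is exactly what an earlier sudo run would leave behind. A minimal sketch of one likely remedy, assuming the root-owned cache directory is the only leftover from running winetricks under sudo:

        # give the cache directory back to the regular user...
        sudo chown -R vectoruser:vectoruser /home/vectoruser/.cache/winetricks
        # ...then run winetricks again as that user, without sudo
        winetricks dotnet40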

    Read the article

  • Visual Studio 2010 tip: Cut empty lines

    - by koevoeter
    How many times have you wanted to move two lines by cutting and pasting them, only to find that the line you cut last was actually a blank line, so your actual code was removed from the clipboard? Visual Studio 2010 has an option that keeps cutting blank lines from overwriting the clipboard. Go and uncheck this one: Tools » Options » Text Editor » All Languages » General » Apply Cut or Copy commands to blank lines when there is no selection. Extra (related) tip: the (free) Visual Studio 2010 Pro Power Tools extension contains (apart from a bunch of other handy features) the commands Edit.MoveLineUp and Edit.MoveLineDown, which do exactly what they say and are mapped automatically to the keyboard shortcuts Alt+Up and Alt+Down. ReSharper (not free) has similar commands for moving lines, by default mapped to Ctrl+Alt+Shift+Up/Down.

    Read the article

  • Ubuntu 11.10 on MacBook Air 2011 - problems with drivers

    - by RankoR
    I have just installed Ubuntu 11.10 on my MacBook Air and have a lot of problems. I used this tutorial; after executing the script and rebooting, some problems were solved, but now the touchpad does not work at all, and the keyboard and screen backlights behave incorrectly (the screen brightness is not at 100% even though it is set to 100% in the settings, and the keyboard backlight turns on and off randomly). How can I fix this? P.S. The script said there was no such file or directory for some .icc file. Google says that I should copy it from the Mac OS X drive, but I completely replaced OS X with Ubuntu.

    Read the article

  • Great Silverlight User Group meeting last night - Thanks Joel!

    - by Dave Campbell
    Last night's Silverlight User Group meeting in Phoenix went really well. We had about 15 in attendance, and everyone seemed engaged with Joel Neubeck's great Windows Phone 7 presentation. When it was over, we gave away a couple copies of Windows 7 Ultimate, one copy of the Expression Suite, an Arc Mouse, a web cam, a bunch of books, other assorted software and some TShirts.  All-in-all I think it was a good time had by all. Thanks to Joel Neubeck for the time and presentation and to Joe's mom for the babysitting! See you all next month.

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Farseer Physics Engine and the Ms-PL License

    - by Stephen Tierney
    Am I able to produce code for a game which uses the Farseer engine and release my code under an open source license other than the Ms-PL? My concern is with the following section from the license: If you distribute any portion of the software in source code form, you may do so only under this license by including a complete copy of this license with your distribution. If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license. If I do not include Farseer in my source code distribution, does this give me an exemption from this clause, as I am not distributing the software? My code merely uses its functions. Nowhere in the license does it force you to provide source code for derivative works or linking works; it simply gives you the option of "if you distribute".

    Read the article

  • how to do database updates in each release

    - by Manoj R
    Our application uses a database (mostly Oracle), and the database is at its core. Each customer has its own database, with its own copy of the application. Now with each new release of our product, we also need to update the database schema. These changes include adding new tables, removing columns, manipulating data, etc. How do people handle this? Are there any standard processes for this? EDIT: The main issue is that the databases are huge, with many tables and a large amount of data. We provide the scripts and some utilities to manipulate the data. How do you handle failures and false negatives? I am mostly looking for articles of this kind: http://thedailywtf.com/Articles/Database-Changes-Done-Right.aspx

    Read the article

  • Record Screen Activity with CamStudio

    - by Asian Angel
    Sometimes a visual demonstration works much better than a list of instructions. If you need to make a demo video for family and/or friends then you might want to have a look at CamStudio. Using CamStudio To get properly set up you will need to install two different files (the main program followed by the codec). Once that is done you are ready to get started. When you start the program you will see a surprisingly small window. Notice the highlighted Record to text…it serves as a visual indicator for the video type selected for recording. Before you start creating a video it would be a good idea to look through some of the settings. The first one to look at is the region or area that you want to record. Next you will want to look through the video options since these will affect the quality and final size of your video files. The default setting for quality is 70…adjust that to the level that best suits your needs. Note: For our example we maxed out the various video settings for best quality. On our system Microsoft Video 1 was listed as the default compressor but as you can see there were other options available. You can configure the settings for the compressor you want to use if desired. Keep in mind that each compressor will have unique settings of their own, so if you change it, be certain to go back and check. We decided to use the CamStudio Lossless Codec for our example (it gave the best results while trying the software). Going back to the main window you can toggle back and forth between .avi and .swf output using the last button. Once you are satisfied with the settings click on the red record button to start. If you need to pause while recording or stop recording click on the system tray icon and select the appropriate command. When you are finished recording, you will be presented with the save file window. Browse for the desired save location and name your new file. Once you have saved the file the movie player window will automatically open so that you view your new video. Our sample video shown here is at 50% of original size so may look slightly “gritty”. The detail was much better at 100%. If you decide to record and save as .swf the process will be identical to recording in .avi format until the movie player window opens. At that time the conversion process from .avi to .swf will begin. When complete you will have a new flash video and html file that goes with it. Depending on which browser you have set as default, you may run into a small problem when the preview for your new .swf file tries to open. There is a small bug in the generated html file. You can use this work-around or… Just open the .swf file directly in your favorite browser. Conclusion CamStudio may not produce the highest quality videos, but it’s free and does a very nice job nonetheless. If you are working on a tight budget or only need to make an occasional video then CamStudio is a very sensible choice. Links Download CamStudio Stable Version & CamStudio Codec *Download links are approximately half-way down the page. Download CamStudio Stable Version & CamStudio Codec at SourceForge *Beta version also available here. 

    Read the article

  • Updated Google Search for iPad Rocks Side-by-Side Search and More

    - by Jason Fitzpatrick
    iPad: If Google is your search engine of choice and you do some serious searching from your iPad, you’ll want to grab a free copy of Google’s radically updated iPad search app. What’s new with Google iPad Search? This version of the app sports Google Instant, coverflow style image search, enhanced voice search, Google+ integration, and overall better integration with Google’s services. Our favorite feature, by far, is the enhanced side-by-side search. You can pull up search results and simultaneously look at a page–watch the video above to see it in action. The New Google Search App for iPad [Google Mobile Blog]

    Read the article

  • [News] Microsoft announces special offers around VS 2010

    Speaking through Somasegar, Microsoft has just announced the promotions that will accompany the retail distribution of Visual Studio 2010. Anyone purchasing a Visual Studio 2005 or 2008 license in the Standard edition will be offered a Professional edition of VS 2010, plus a one-year MSDN Essentials subscription. "Today, we're also unveiling an offer for customers who purchase Visual Studio Professional at retail. To help these developers fully realize the power and benefits of a MSDN subscription, I am announcing MSDN Essentials, a one-year trial MSDN subscription that will be included with every retail copy of Visual Studio Professional sold." It is also possible to order VS 2010 ahead of its release ...

    Read the article

  • PhpStorm theme installation

    - by Chelios
    I want to install a PhpStorm color scheme from XML. The official PhpStorm IDE website says: To install a theme, download the corresponding xml file, copy it to ~/Library/Preferences/RubyMine10/colors/ (Mac OS X) or C:\Users\%username%\.RubyMine10\config\colors\ (Windows) and restart RubyMine. Then go to IDE Settings | Editor | Colors and select your theme. However, nothing is said about Ubuntu. I tried to find anything similar to the "Library" folder inside the PhpStorm folder but failed. How can I install the theme?
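    On Linux the IDE keeps its per-user configuration in a hidden directory under your home folder rather than in a "Library" folder. A minimal sketch, assuming a PhpStorm build whose settings directory is named something like ~/.WebIde40 (the exact directory name is an assumption and varies by PhpStorm version, so check which ~/.WebIde* directory actually exists):

        # ~/.WebIde40 is a hypothetical example; substitute your real directory
        mkdir -p ~/.WebIde40/config/colors
        cp MyTheme.xml ~/.WebIde40/config/colors/
        # restart PhpStorm, then choose the scheme under
        # IDE Settings | Editor | Colors & Fonts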

    Read the article

  • SharePoint 2010 Information Worker VM available for download

    - by Enrique Lima
    If you are interested in a test drive of the technologies around the Wave 14 launch, take a look at the VM made available from Microsoft. It is a very well rounded option for exploring the new products. 2010 Information Worker Demonstration and Evaluation Virtual Machine. Note: It is important to understand that you will need a system with Hyper-V to import this VM and get it up and working. Also, make sure you keep a copy of the original unpacked VM, as this is based on a trial version of the OS (time bombed). There is a chance to rearm the VM, but you are better off either keeping the original files or taking a snapshot as soon as the VM is living in your Hyper-V environment.

    Read the article

  • How to install gnome shell extensions offline?

    - by nosklo
    I know how to go to the https://extensions.gnome.org/ website and download gnome-shell extensions, but now I need to install some extensions available there on a computer without any internet access at all. It is on an internal corporate network and there is no way I can get outside internet access on it, so I must find another way. I can copy files on a USB disk. On my home computer, I have found my extensions at ~/.local/share/gnome-shell/extensions/, but just copying this folder to the target corporate computer didn't do the trick. Running gnome-tweak-tool gives me an "Install Shell Extension" button, but I don't know how to download an extension in a format acceptable to this button. I have tried to point it to the folder above but it didn't work either. What do I need to do?
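    A minimal sketch of the usual offline route, assuming you can download the extension's zip file from extensions.gnome.org on a connected machine and carry it over on the USB disk. The UUID below is a placeholder; it must match the "uuid" field inside the extension's metadata.json:

        UUID='example-extension@someauthor.example.com'
        mkdir -p ~/.local/share/gnome-shell/extensions/"$UUID"
        unzip extension.zip -d ~/.local/share/gnome-shell/extensions/"$UUID"
        # restart GNOME Shell (Alt+F2, then r, then Enter) and enable the
        # extension with gnome-tweak-tool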

    Read the article

  • Permissions problem when copying files to /usr/share/tomcat6

    - by Nazar Hussain S
    Hi, I am running the SpringSource framework on Ubuntu 10.01. In my home folder I have installed the SpringSource IDE, and my tomcat6 app server is in /usr/share/tomcat6. While working through the sample project springapp, I created the springapp directory in the /usr/share/tomcat6/webapps/ folder using sudo, as I am unable to do it directly. On running the ant deploy or ant deploywar command, I am unable to copy the sample file index.jsp from my workspace in the SpringSource IDE to the springapp directory in /usr/share/tomcat6/webapps, because I get a permission denied error while copying the .jsp file. Can anybody please suggest how to overcome this issue? Regards
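    The error happens because /usr/share/tomcat6/webapps (and the springapp directory created with sudo) is not writable by your user, so ant cannot copy files into it. A minimal sketch of one common workaround, assuming the Ubuntu tomcat6 package uses a tomcat6 group that can be given ownership of the webapps tree:

        # let your user write into webapps via the tomcat6 group
        sudo chgrp -R tomcat6 /usr/share/tomcat6/webapps
        sudo chmod -R g+w /usr/share/tomcat6/webapps
        sudo usermod -aG tomcat6 "$USER"   # log out and back in to pick this up
        # alternatively, configure the ant deploy target to copy into a
        # directory that your user already owns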

    Read the article

  • Make sure you take advantage of Visual Studio 2010 license offers before April 12th

    - by Aaron Kowall
    For Visual Studio 2010 Professional

    Microsoft Visual Studio 2010 Professional will launch on April 12, but you can beat the rush and secure your copy today by pre-ordering at the affordable estimated retail price of $549, a saving of $250. If you use a previous version of Visual Studio or any other development tool, then you are eligible for this upgrade. Along with all the great new features in Visual Studio 2010 (see www.microsoft.com/visualstudio), Visual Studio 2010 Professional includes a 12-month MSDN Essentials subscription which gives you access to core Microsoft platforms: Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, and Microsoft SQL Server 2008 R2 Datacenter. So visit http://www.microsoft.com/visualstudio/en-us/pre-order-visual-studio-2010 to check out all the new features and sign up for this great offer.

    Ultimate Offer for MSDN Subscribers

    Also, don’t forget about the Ultimate Offer, which basically gives you the opportunity to use a higher-end Visual Studio SKU for the duration of your MSDN agreement. Make sure you review your MSDN license BEFORE APRIL 12th to make sure you are in the right spot to maximize this benefit. Technorati Tags: VS 2010

    Read the article

  • SPRoleAssignment Crazy Caviats

    - by MOSSLover
    I’m not sure if this bug exists in any other environment, but here are a few issues I ran into when trying to use SPRoleAssignment and SPGroup: When trying to use Web.Groups["GroupName"], it basically told me the group did not exist, so I had to change the code to use Web.SiteGroups["GroupName"]. I could not add the role assignment to the web and run Web.Update() without adding an additional Web.AllowUnsafeUpdates = true; however, on my virtual machine I could do a Web.Update() without the extra piece of code. I kept receiving an error in my browser stating that I should hit the back button and update my permissions. After fixing those two issues I was able to copy the permissions from a page item into a site for my migration. Hopefully one of you can learn from my error messages if you run into similar issues in the future. Technorati Tags: SPRoleAssignment,MOSS,SPGroup
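    Putting the two fixes together, here is a minimal sketch of the pattern described above (it requires using Microsoft.SharePoint; the URL, group name, and role definition are illustrative, not taken from the original scenario):

        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            // use SiteGroups rather than Groups to resolve a site collection group
            SPGroup group = web.SiteGroups["GroupName"];

            SPRoleAssignment assignment = new SPRoleAssignment(group);
            assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Contribute"]);

            web.AllowUnsafeUpdates = true;   // needed on some environments
            web.RoleAssignments.Add(assignment);
            web.Update();
            web.AllowUnsafeUpdates = false;
        }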

    Read the article

  • Apt-get 403 Forbidden, but accessible in the browser

    - by labarna
    I've noticed that running apt-get update recently has resulted in quite a few PPAs returning "403 Forbidden". In an effort to clean them up I had a look:

        W: Failed to fetch http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/dists/raring/main/binary-amd64/Packages 403 Forbidden
        W: Failed to fetch http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/dists/raring/main/binary-i386/Packages 403 Forbidden
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    The strange thing is, if I copy these URLs into my browser I can access the files just fine. Why would apt-get report "403 Forbidden" if they're still accessible? I tried re-adding the PPA through add-apt-repository, which downloads the signing key again, and it still reported "403 Forbidden".
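    Since the browser and apt are not necessarily taking the same path to the server, a quick sanity check is to see whether apt is configured to use a proxy and to fetch one of the failing index files with a plain command-line client. This is only a diagnostic sketch, not a guaranteed fix; the URL is one of the failing ones from the output above:

        # is apt (or the shell environment) pointing at a proxy the browser skips?
        apt-config dump | grep -i proxy
        env | grep -i proxy
        # fetch the same index directly and inspect the response headers
        wget -S -O /dev/null http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/dists/raring/main/binary-amd64/Packages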

    Read the article

  • WidgetBlock Speeds Up Browsing by Removing Social Media Widgets

    - by Jason Fitzpatrick
    Chrome: If you’re tired of web pages cluttered with social media buttons, WidgetBlock bans the buttons and speeds up the load time of web pages in the process. Even on a snappy internet connection you’ve likely noticed, thanks to the deluge of social media buttons loading in the background, a noticeable lag on popular web sites. WidgetBlock blocks widgets from loading (just like popular ad blocking software blocks ads from loading). The above screenshot, taken from a popular media site, shows just how much screen real estate is taken up by social media widgets. Installing WidgetBlock banishes the social media widgets and speeds load time. Hit up the link below to grab a free copy. WidgetBlock [Chrome Web Store]

    Read the article

  • Save Web Content Directly to Google Drive in Chrome [Extension]

    - by Asian Angel
    Are you looking for a quick and easy way to save images, documents, and more directly to Google Drive while browsing? Then you may want to grab a copy of the ‘Save to Google Drive’ extension for Chrome. Once you have installed the extension it is very easy to start saving all that wonderful web content to your Google Drive account via the Context Menu or the Toolbar Button as seen in the screenshot above. One thing to keep in mind is that the first time you use the extension you will be asked for permission to access your account as seen in the screenshot below. Here is a quick look at the options currently available for the extension…

    Read the article

  • Internal WordPress pages all 404 when using WAMP

    - by rlesko
    I have a problem when using WAMP while designing and coding my site. I have a local version of WordPress installed in WAMP, and I use it as a tool to see my changes while coding. I also have the same files uploaded to some free hosting. The problem is that when I want to access, for example, http://localhost/contact, the browser gives me a 404 error. As I said, the same files (an exact copy from my PC) are on the web, and when I go to http://thefalljourneyindia.iblogger.org/contact it opens the page fine (without the style, of course, because I haven't made one yet). Why is that, and how do I get to see all of my pages locally like I can when the WP installation is online?

    Read the article

  • DIY Door Lock Grants Access via RFID

    - by Jason Fitzpatrick
    If you’re looking to lighten the load on your pocket and banish the jingling of keys, this RFID-key hack makes your front door keycard accessible–and even supports groups and user privileges. Steve, a DIYer and Hack A Day reader, was looking for a solution to a simple problem: he wanted to easily give his friends access to his home without having to copy lots of keys and bulk up their key rings. Since all his friends already carried a Boston public transit RFID card, the least intrusive solution was to hack his front door to support RFID cards. His Arduino-based solution can store up to 50 RFID card identifiers, supports group-based access, and, thanks to a little laser cutting and stain, the project enclosure blends in with the Victorian styling of his home’s facade. Hit up the link below to see his code–for a closer look at the actual enclosure check out this photo gallery. RFID Front Door Lock [via Hack A Day]

    Read the article

  • So you want a French Site?

    - by juanlarios
    I thought I would write a quick write-up of how to create a French site in SharePoint 2007. I'm not talking about a Variation, but just a plain French site from the ground up. There were some gotchas that I felt were worth blogging about.

    First: go to the Microsoft TechNet article and follow the install instructions. Make sure that when you get to the download page you select "French" in the drop down, and that you download and install the right language pack. I noticed that if you do not click the "change" button, even though you selected the French language pack, it reverts back to the English language pack.

    Second: You will notice a couple of things. When you go to Central Admin you will see the following:

    Now you can pick between a French site or an English one. You will get this if you install other language packs, and they will be listed in the drop down. You will notice that you now have French headings and French listings of sites. You see "Publishing" as a heading because I have a custom site definition that I deployed as a French site.

    Third: As you start navigating around and trying to create document libraries or sites, you will start getting errors. Errors like the following: "Cannot make a cache safe URL for "SelectorControls.js", file not found. Please verify that the file exists under the layouts directory." Troubleshoot issues with Windows SharePoint Services. Once you resolve the issue with this "js" file, you will find that there are other js files that are missing. The only problem is that if you are not fluent in French or the language you are trying to deploy, well, you'll have a tough time understanding the error messages, as they will all be in the new language you are trying to deploy. So let's just talk about what happened when you installed the language pack. In the 12 hive, under 12/Template, you will now see a 1033 folder and a 1036 folder. The 1036 folder is the one that was created and added as part of the language pack. What the above error is saying is that now that it's looking at the 1036 folder, well, it's missing some files. The nice thing is that these files are included in the 1033 folder (which is the English language pack). Simply copy and paste the controls from the one folder to the other. There will be more than one conflict, so you will have to move several controls over. I can't remember how many, but simply add them as the error messages come up. I had to add some navigation controls and some content selectors.

    Now that's all that you need to install the French language pack and create site collections that are entirely in another language. Do not confuse this with Variations, where you can have multiple language sites. For those of you doing a little bit extra with this, let me share what I was doing and what I needed to get it working for me. I had a custom site definition which was obviously not showing up in my selection of French sites. I was under the impression that all sites in English would show up in French and that the sites were simply routed to a new resource file for French content. And that is the case, but there is a little extra that needs to be done if you have a custom site definition deployed:

    First: Under hive 12/Template/1033/XML there is a listing of site definition files that are deployed to the English side of things.
    If you navigate to 12/Template/1036/XML and open one of the site definitions, you will see that they are similar and reference the existing site definitions installed on the server, except that they have some French added to the descriptions and names. Simply copy the xml file of your custom template to the 1036 folder to have it show up as a selection when you choose French from the dropdown while creating a site collection. You can go ahead and change the description and name to suit the language it's under.

    Second: As part of my site definition, I packaged up several list templates that were saved as STP files. When you navigate to the list template listing, well, the templates are for English sites, not French, so I cannot create document libraries based on the template. What now? Well, here comes KWizCom to the rescue! They seem to have put out an "STP language converter" where you can take a site template or list template and convert it to any target language you are after. It's a free download; use it and you're good to go. One thing I will mention is that when I converted the English templates, I went ahead and converted them to French-Canadian. And it didn't work! I finally figured out that the French version it was expecting in the French site was "French-France". I don't know why that is; it's just what needs to be done to get it working. When I did that, I was able to use the list templates that I created in the English site for the French site.

    Hope it helps, good luck!

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article
