Search Results

Search found 29938 results on 1198 pages for 'version hunter'.


  • SQL Server PowerShell Provider follows the Version of PowerShell on the Host and other errata

    - by BuckWoody
    There may be some misunderstanding about how the PowerShell Provider for SQL Server works. I've written an article or two explaining that you can use PowerShell with SQL Server without having the SQL Server 2008 (or higher) provider around. After all, PowerShell just uses .NET, and SQL Server "Server Management Objects" (SMO) listen to that interface as well. In SQL Server 2008 and higher we created a "MiniShell" for PowerShell that gives you the ability to treat a SQL Server Instance as a drive (called a "Provider", or path, or drive) and a few commands (called command-lets). Using these two simple constructs you can move around SQL Server quickly and work with the objects it holds.

    I read the other day where someone stated that we had "re-compiled" PowerShell, so that you would have version 1.0 from SQL Server and 2.0 on your new server. Not so! Drop to a SQLPS prompt and a PowerShell prompt and type this in each: $PSVersionTable. They should return the same value. You can think of a MiniShell as simply a compiled "profile" that gives you those providers and command-lets automatically – that's all. In fact, you can load the SMO libraries yourself without the SQL Server 2008 Provider anywhere in sight. I do this all the time, since the MiniShell also has other restrictions. Also remember that if you run a PowerShell script as a SQL Agent Job step type (in 2008 and higher), you're running under the context of the account that starts Agent – I think most folks know this, but it's good to keep in mind.

    There's a re-written section of Books Online that covers working with this very nicely – it also answers "How do I connect to another server using the SQL Server PowerShell Provider" (hint: it's just CD), "How do I load all the SMO stuff if I don't want to use the Provider", and more. Be sure to check out the note at the bottom that explains the firewall exceptions you'll need to enable to CD to that remote server. Here's that link: http://msdn.microsoft.com/en-us/library/cc281947.aspx

    Read the article

  • New Version Demonstration VM BIC2g 2013-04 Partner Edition

    - by Mike.Hallett(at)Oracle-BI&EPM
    This Oracle Business Intelligence Linux VM virtual appliance ("BIC2g") was developed to support Oracle OBI & BI-Apps sales and Oracle partners in product demonstrations, training activities and POC activities. It is available on ftp.oracle.com (see the deployment guide and "BIC2g 2013-04 Partner Edition Readme" pdf from the link below) and is available to OPN member partners. This BIC2g image is based on OBIEE v11.1.1.7, with Essbase and Essbase Studio Server started when starting BI. It also contains:
      - Updated BI-Apps Cross Functional Demo (date advanced from 2011 to 2013), including DAC 11.1.1.6.4 and Informatica 9.0.1, configured for a load against EBS R12. Both the 7.9.6.3 and the 7.9.6.4 rpd/catalog versions of BI-Apps are provided.
      - Updated integrated Essbase - BI Apps - EBS Demo (date advanced from 2009 to 2013); the BI Apps Data Sets have been re-configured to remove VPD (a simplification) and performance is greatly improved.
    Note that this image is identical to Oracle's internal BI demonstration image, except that Endeca has been removed pending availability of the latest Endeca version on OTN. Once it is available on OTN we will provide a replacement that contains Endeca. Some of the screen shots in the Readme pdf show Endeca, but it is not on this (2013-04) image. The FTP access details and password are shown at the bottom of the page @ BI Solutions Engineering Partner Portal.

    Read the article

  • In a Shell scripts, check version of installed package, make a decision based on output

    - by DJDarkViper
    Looking to write a cross-distro / cross-version shell script that makes sure a specific (forced) version of PHP is installed. Example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an old version of PHP, to update the repo to point to a package for 5.4, and if the distro has too new a version, to downgrade to 5.4 appropriately. I'm still not entirely clear on the technical limits of what you can do in a shell, and I'll be perfectly frank that I'm still not totally used to the existing tools. The best I can think of at the moment is: php -v | grep "PHP 5" but that returns a bunch of granular characters that can change (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)). I'm not sure what to pipe to after this to extract the characters I'm interested in. I'm not sure if I'm being totally clear, and I'm not sure how to ask this. Basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the version number) and make a decision based on that output? EDIT: This line ended up doing pretty dang good: php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3 A bit long in the tooth, but it gives me "5.3", "5.4" and "5.5", which is exactly what I need to work with.
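
    For what it's worth, here's a minimal sketch of the decision step built around that pipeline; the echo lines are placeholders for the actual repo-update and downgrade steps, which differ per distro:

      #!/bin/sh
      # Extract the major.minor PHP version using the pipeline from the post, then branch on it.
      # The echo lines stand in for the real repository changes.
      ver=$(php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3)

      case "$ver" in
        5.4) echo "PHP 5.4 already installed; nothing to do." ;;
        5.3) echo "Older than 5.4: add a 5.4 repo and upgrade here." ;;
        5.5) echo "Newer than 5.4: pin or downgrade to 5.4 here." ;;
        *)   echo "Unexpected PHP version: '$ver'" >&2; exit 1 ;;
      esac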

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost, with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in make.conf so that each host only pulls down binary packages. All works well from that side. I use eix to check software versions for upgrades, but have hit a problem where eix sees an available version ahead of what is actually available on the binhost. Using glibc as an example:
      ietpl [VE] / # emerge -s glibc
      Searching...
      [ Results for search key : glibc ]
      [ Applications found : 1 ]
      *  sys-libs/glibc
         Latest version available: 2.14.1-r3
         Latest version installed: 2.14.1-r3
         Homepage:
         Description: GNU libc6 (also called glibc2) C library
         License: LGPL-2
    Then eix reports a higher version available:
      ietpl [VE] / # export LASTVERSION='{last}<version>{}'
      ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
      sys-libs glibc [2.14.1-r3] [2.15-r2]
    What I'm after is for eix to report the latest available version as 2.14.1-r3, like emerge does. I have a feeling this is possible, since without any formatting eix returns:
      Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s
    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the binary flag would do it, but that returns no matches:
      --binary    Match packages with *.tbz2 files.

    Read the article

  • SVN: Working with branches using the same working copy

    - by uXuf
    We've just moved to SVN from CVS. We have a small team, everyone checks in code on the trunk, and we have never used branches for development. We each have directories on a remote dev server with the codebase checked out. Each developer works in their own sandbox, with an associated URL to pull up the app in a browser (something like the setup here: Trade-offs of local vs remote development workflows for a web development team). I've decided that for my current project I'll use a branch, because it would span multiple releases. I've already cut a branch, but I am using the same directory as the one originally checked out (i.e. for the trunk). Since it's the same directory (or working copy) for both the branch and the trunk, if, for example, a bug pops up in the app, I switch to the trunk and commit the change there, and then switch back to my branch for my project development. My questions are: Is this a sane way to work with branches? Are there any pitfalls that I need to be aware of? What would be the optimal way to work with branches if separate working copies are out of the question? I haven't had issues yet, as I have just started working this way, but all the tutorials/books/blog posts I have seen about branching with SVN imply working with different working copies (or perhaps I haven't come across an explanation of mixed working copies in plain English). I just don't want to be sorry three months down the road when it's time to integrate the branch back into the trunk.
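
    For reference, a sketch of that single-working-copy dance using svn switch (the branch paths are illustrative, and the ^/ shorthand needs Subversion 1.6 or later):

      svn switch ^/trunk                      # point the working copy back at trunk
      # ...fix the bug...
      svn commit -m "Fix bug X on trunk"
      svn switch ^/branches/my-project        # return to the long-lived project branch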

    Read the article

  • On improving commit practices

    - by greengit
    I was thinking about ways to improve my commit practices. Is there any correlation between the number of source code lines and the number of commits? In a recent project that I was involved in, I was averaging about 30 commits per 1000 lines. One typical file from the project has these stats:
      language: JavaScript
      total commits that include this file: 32
      total lines: 1408
      source lines: 1140
      comment lines: 98
      function declarations: 28
      other declarations: 8
    Another file has these:
      language: Python
      total commits that include this file: 17
      total lines: 933
      source lines: 730
      comment lines: 80
      classes: 1
      methods: 10
    I also think that the number of commits is related more to the number of features or changes to the code, and less to the number of lines. The general git community motto is "make short commits and commit often". So, do you really think about your commit strategy before you start a project? For that matter, is there even such a thing as a commit strategy? If so, what's yours?

    Read the article

  • Trade-offs of local vs remote development workflows for a web development team

    - by lamp_scaler
    We currently have SVN set up on a remote development server. Developers SSH into the server and develop in their own sandbox environments on it. Each one has a virtual host pointed at their sandbox so they can preview their changes in a web browser by connecting to developer-sandbox1.domain.com. This has worked well so far because the team is small and everyone uses computers with varying specs and OSs. I've heard some web shops use a workflow where developers work off of a VM on their local machine and then finally push changes to the remote server that hosts SVN. The downside to this is that everyone needs to make sure their machine is powerful enough to run both the VM and all their development tools. It would also mean creating images that mirror the server environment (we use CentOS), having developers install them into their VMs, and creating new images every time there is an update to the server environment. What are some other trade-offs? Ultimately, why did you choose one workflow over the other?

    Read the article

  • Are there any subversion "dash board" web applications that can show me a list of recent commits from all my repositories?

    - by Joe
    I am looking for something like a Subversion dashboard that, at the very least, can show me commits from across a group of repositories. Is there anything like this available? Since it could be dead simple and I can't find anything immediately, I am thinking of just scratching my own itch here, but I am hoping someone has wanted this before. Are there any Subversion "dashboards" that can show me even a simple, Twitter-like list of commits from across my repositories?
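
    Failing a ready-made dashboard, a tiny script like the sketch below gives a crude commit feed (the repository URLs are placeholders):

      #!/bin/sh
      # Print the five most recent commits from each repository in the list.
      for repo in http://svn.example.com/repo1 http://svn.example.com/repo2; do
        echo "== $repo =="
        svn log -l 5 "$repo"
      done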

    Read the article

  • git tagging comments - best practices

    - by Evan
    I've adopted a tagging system of x.x.x.x, and this works fine. However, you also need to leave a comment with your git tag. I've been using descriptions such as "fixes bug Y" or "feature X", but is this the best sort of comment to be leaving? In particular, what if a tag encompasses several fixes? It seems not to make sense to have a very long tag comment. Does this mean that I should be creating a tag for every bug fix or feature, or should the tag comments reflect something else? I have a few ideas that may be good, but I'd love some advice from seasoned git tagging veterans :) For those who prefer specific examples:
      1.0.0.0 - initial release
      1.0.0.1 - bug fix for issue X
      1.0.0.2 - (what if this is a bug fix for multiple issues? the comment would be too long, no?)
    Another example: here the comments are more or less the same as the tags, which seems redundant. Is there something else we could be describing? https://github.com/osCommerce/oscommerce2/tags
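
    One option, sketched below purely as an illustration: keep the annotated tag message short and let it point at a changelog or issue tracker instead of enumerating every fix (versions and messages are examples only):

      git tag -a 1.0.0.1 -m "Bug-fix release: issue X"
      git tag -a 1.0.0.2 -m "Bug-fix release: issues Y and Z (details in CHANGELOG)"
      git push origin --tags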

    Read the article

  • Is there tool agnostic terminology for source control activities?

    - by C. Ross
    My team is entering into some discussions on source control (process and possibly tools), and we would like tool-agnostic terminology for the various activities. The environment has multiple (old) VCSs and multiple desired (new) VCSs. Is there a standard definition of the activities, or at least a commonly accepted set? Example activities (in CVS terminology): Branch, Check out, Update, Merge.

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on Ars Technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss) about the LHC, or as the instruments are also known, the "particle colliders". Beyond the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK. Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens. You only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take for the design of data "flows" – and what implications does this have for new technologies like StreamInsight?

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Red Gate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot-fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot-fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to have OPS assign a developer the hot-fix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK)
    Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and also Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you'll get a runtime environment for Windows Azure which allows you to develop, test and even run code all locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You'll also get a few samples and code blocks, along with all of the libraries you need to code with Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK)
    Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. This is all of the training you need to get started, and even beyond. The WATK is updated frequently - and you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • Examples of continuous integration workflow using git

    - by Andrew Barinov
    Can anyone provide a rough outline of their git workflow that works well with continuous integration? E.g. how do you branch? Do you fast-forward commits to the master branch? I am primarily working with Rails, as well as client- and server-side JavaScript. If anyone can recommend a solid CI technology that's compatible with those, that'd be great. I've looked into Jenkins but would like to check out other good alternatives. To put some context to this, I am planning on transitioning from working as a single developer to working as part of a team. I'd like to start standardizing my own personal workflow so that I can onboard new devs quickly.
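
    Not a definitive answer, but one widely used shape for this, sketched with made-up branch names: short-lived feature branches that the CI server builds on every push, merged to master only when the build is green.

      git checkout -b feature/signup-form
      # ...commit work...
      git push -u origin feature/signup-form   # CI builds the feature branch on push
      git checkout master
      git merge --no-ff feature/signup-form    # merge once CI is green
      git push origin master                   # CI builds master again before deploy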

    Read the article

  • How can I merge two SubVersion branches to one working copy without committing?

    - by Eric Belair
    My current Subversion workflow is like so: the trunk is used to make small content changes and bug fixes to the main source code, while branches are used for adding/editing enhancements and projects. Trunk changes are made, tested, committed and deployed pretty quickly, whereas enhancements and projects need additional user testing and approval. At times I have two branches that need testing and approval at the same time. I don't want to merge to the trunk and commit until the changes are fully tested and approved. What I need to do is merge both branches into one working copy without any commits. I am using TortoiseSVN, and when I try to merge the second branch, I get an error message: "Cannot merge into a working copy that has local modifications". Is there a way that I can do this without committing either merge?

    Read the article

  • How should I manage "reverting" a branch done with bookmarks in mercurial?

    - by Earlz
    I have an open source project on Bitbucket. Recently, I've been working on an experimental branch which (for whatever reason) I didn't make an actual branch for. Instead, I used bookmarks. So I made two bookmarks at the same revision: test – the new code I worked on, which should now be abandoned (the experiment failed); and main – the stable old code that works. I worked in test. I also pushed from test to my server, which ended up switching the tip tag to the new unstable code, when I really would rather it stayed at main. I "switched" back to the main bookmark by doing hg update main and then committing an insignificant change. So, I pushed this with hg push -f and now my source control is "correct" on the server. I know there must be a cleaner way to "switch" branches. What should I do in the future for this kind of operation?
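
    For context, a hedged sketch of one cleaner way to step back to the stable line, using the bookmark names above (exact behaviour depends on your Mercurial version):

      hg update main        # put the working copy back on the stable bookmark
      hg bookmark -d test   # delete the abandoned experimental bookmark locally
      hg push -B main       # publish the main bookmark's position to the server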

    Read the article

  • Database source control

    - by Bojan Skrchevski
    Should database files (scripts etc.) be kept in source control? If so, what is the best method to keep them there and update them? Is there even a need for database files to be in source control, since we can put them on a development server where everyone can use them and make changes if needed? But then we can't get them back if someone messes them up. What approach is best for keeping databases in source control?

    Read the article

  • TortoiseSVN post-commit.bat doesn't work [migrated]

    - by user565739
    I am using TortoiseSVN on Windows 7 x64. I tried to put a post-commit.bat in the hooks folder of a repository, but it doesn't work at all. So I tried to put a pre-commit.bat (the content is exactly the same as post-commit.bat) in hooks, and it worked fine. This is very strange. The .bat file is very simple; I just tried with:
      @echo off
      setlocal
      set REPOS=%1
      set TXN=%2
      xcopy C:\a C:\b\ /S /F
      exit 0
    Has anyone made post-commit work with TortoiseSVN?

    Read the article

  • Are there advantages to using a DVCS for a solo developer?

    - by SnOrfus
    Right now, I use VisualSVN on my server, and have AnkhSVN/TortoiseSVN on my personal machine. It works well enough, and I don't have to change, but if I can see some benefits of using a DVCS, then I might give it a go. However, if there's no point or difference when using it without other people, then I won't bother. So again, I ask: are there any benefits to using a DVCS when you're the only developer?

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs – 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:
      - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
      - Team members directly modify code on the server. This keeps code out of sync, requires comparison just to be sure you are working with the latest code, and complex merge problems arise.
      - Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say "no, it will take 10x longer", I have to constantly explain these issues and risk being perceived by management as "slow".
      - The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
      - Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.
      - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is indeed error prone. The evidence "speaks for itself" in their view.
      - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time consuming. Multiply this by the 10+ projects we manage.
      - Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.
    So, the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue into a pull request and automatically associate your current branch with it. I'm wondering whether it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source: the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even forks only for less-privileged devs), will the pull-request feature of hub operate on pull requests against the upstream master repository (the organization's repository), or does it behave differently? EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge to your fork's master branch, when you then try to run hub -i <issue #> you get an error: "User is not authorized to modify the issue". So, apparently that workflow won't work.
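
    For anyone following along, a rough sketch of the hub flow under discussion (the flags reflect older hub releases and may differ today; the branch, owner names and issue number are placeholders):

      git checkout -b fix-login-bug
      # ...commit work...
      git push origin fix-login-bug                     # push the branch to your fork
      hub pull-request -i 42 -b myorg:master -h mydev:fix-login-bug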

    Read the article

  • Managing Eclipse projects in source control

    - by Matt Phillips
    I've been using Eclipse for a long time to do development. One of the problems I've come across when working on other people's projects is that if they come from source control, some of the Eclipse project files ("default.properties" and other XML config files) are missing. It's usually a big pain in the butt to get the project running in Eclipse. I understand the reasoning for not tracking certain files, because they may be full of settings specific to a particular Eclipse install. How do all of you manage that?

    Read the article

  • When should I make the first commit to source control?

    - by Kendall Frey
    I'm never sure when a project is far enough along for its first commit to source control. I tend to put off committing until the project is 'framework-complete', and primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what could go wrong. Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in: the empty file? The boilerplate code? The first features? Some other point? Also, what are the reasons to check in at a specific point?
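
    One habit, offered only as an illustration of the "check in early" end of the spectrum rather than a recommendation:

      # Create the repository before writing any real code; the boilerplate becomes the first commit,
      # and every later decision point is just another small commit.
      git init
      git add .
      git commit -m "Initial commit: project boilerplate"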

    Read the article

  • How and where do you store your private work/sourcecode?

    - by Amir Rezaei
    I have worked as a developer for over 10 years now. During that time I have had my own small projects where I have developed tools, applications and games. I have not found any robust solution for storing my work. It's always fun to get back to your code and see how you did things before and how you would do them now; it's simply work that would be unfortunate to lose. There are SVN solutions such as Google's Project Hosting; however, I'm not interested in sharing my code or making it open source. Currently I'm hosting my own SVN server. So here comes my question: how and where do you store your private work/source code? Requirements: source code versioning; backup; preferably free; remote access. Edit: I have used Dropbox + TrueCrypt + SVN. Unfortunately you are limited to 5 GB.
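
    As one hedged example of a free setup that covers those requirements (the host and paths below are placeholders; a self-hosted SVN server like the one mentioned plays the same role):

      # A bare Git repository on any machine reachable over SSH gives versioning,
      # remote access and an off-site copy in one step.
      ssh backup-host 'mkdir -p ~/repos && git init --bare ~/repos/myproject.git'
      git remote add origin backup-host:repos/myproject.git
      git push -u origin master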

    Read the article

  • Enable DreamScene in Any Version of Vista or Windows 7

    - by DigitalGeekery
    Windows DreamScene was a utility available for Vista Ultimate that allowed users to set video as desktop wallpaper. It was dropped in Windows 7, but we’ll take a look at how to play DreamScenes in all versions of Windows 7 or Vista. Downloading DreamScenes First, you’ll need to find some DreamScenes to download. We’ve found some nice ones at both DreamScene.org and DeviantArt. You can find those download links at the end of the article. They’ll come as compressed files, so you’ll need to extract them after downloading. Windows 7 DreamScene Activator If you are running Windows 7 you can use Windows 7 DreamScene Activator. This free portable utility enables DreamScene in both 32 & 64 bit versions of Windows 7. Users can then set either MPG or WMV files as desktop wallpaper. Download and extract the Windows 7 DreamScene Activator (link below). Once extracted, you’ll need to run the application as administrator. Right-click on the .exe and select Run as administrator. Click on Enable DreamScene. This will also restart Windows Explorer if it is open. To play your DreamScene, browse for the file in Windows Explorer, right-click the file and select Set as Desktop Background. Enjoy your new Windows 7 DreamScene.   Although it says it is for Windows 7 only, we were able to get it to work with no problems on Vista Home Premium x32 as well.   You can Pause the DreamScene at anytime by right-clicking on the desktop and selecting Pause DreamScene.   When you are ready for a change, click Disable DreamScene and switch back to your previous wallpaper. Using VLC Media Player Users of all versions of Windows 7 & Vista can enable a DreamScene using VLC. Recently, we showed you how to set a video as your desktop wallpaper in VLC.  Since DreamScenes are in MPEG or WMV format, we will use the same tactic to display them as desktop wallpaper. We’ll just need to make a few additional tweaks to the VLC settings. You’ll need to download and install VLC media player if you don’t already have it. You can find the download link below. Next, select Tools > Preferences from the Menu. Select the Video button on the left and then choose DirectX video output from the Output dropdown list. Next, select All under Show Settings at the lower left, then select the Video button on the left pane. Uncheck Show media title on video. This will prevent VLC from constantly showing the title of the video on the screen each time the video loops. Click Save and the restart VLC.   Now we will add the video to our playlist and set it to continuously loop. Select View > Playlist from the Menu. Select the Add file button from the bottom of the Playlist window and select Add file.   Browse for your file and click Open.   Click the Loop button at the bottom so the video plays in a continuous loop.   Now, we’re ready to play the video. After the video starts playing, select Video > DirectX Wallpaper from the Menu, then minimize VLC.   If you’re using Aero Themes, you may get a pop-up warning and Windows will switch automatically to a basic theme.   If looping one video gets to be a little repetitive, you can add multiple videos to your playlist in VLC and loop the entire playlist. Just make sure you toggle the Loop button on the playlist window to Loop All. Now you’ve got a nice DreamScene playing on your desktop. Another cool trick you can do with VLC is take snapshots of favorite movie scenes and set them as backgrounds. 
When you're ready to go back to your old wallpaper, maximize VLC, select Video and click DirectX Wallpaper again to turn off the video background. Occasionally we were left with a black screen and had to manually change our wallpaper back to normal even after turning off the DirectX Wallpaper. Note: Keep in mind that the VLC method uses a lot of resources, so if you try to run it on older hardware, or say a netbook, you're not going to get good results. We also tried to use the VLC method on XP, but couldn't get it to work. If you have, leave a comment and let us know. While the DreamScene feature never really caught on in Vista, we find DreamScenes to be a cool way to pump a little life into your desktop on any version of Vista or Windows 7. Downloads: DreamScenes from Dreamscene.org | DreamScenes from DeviantArt | Download VLC media player | Windows 7 DreamScene Activator

    Read the article
