Search Results

Search found 8190 results on 328 pages for 'switch'.

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET-served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests, such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.

    Native and Managed Modules

    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there are a host of native modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Native modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request.

    When running in IIS 7, the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which - unlike the core HttpRuntime classes in ASP.NET - receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, Web API etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility: because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications, as running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules

    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    The interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether registered managed modules run for every request. Realistically, though, this flag does not by itself control whether managed code fires for all requests or not. Rather, it is an override for the preCondition flag on a particular module. With the flag set to the default of true, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is hit even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now, so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You would probably expect that non-ASP.NET requests immediately stop being funneled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I declare a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even with runAllManagedModulesForAllRequests="false", the module is fired. Not quite what you'd expect.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests fire this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override that setting.

    runAllManagedModulesForAllRequests and Http Application Events

    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. While the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that with runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and with the flag set to "false" you only see ASP.NET requests.
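    For illustration, here is a minimal global.asax sketch of the kind of event handler being described (the trace call is just a placeholder, not from the article's samples):

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            // With runAllManagedModulesForAllRequests="true" this fires for every
            // request IIS serves - static files included. With "false" it behaves
            // as if preCondition="managedHandler" were set and fires only for
            // ASP.NET requests.
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                System.Diagnostics.Debug.WriteLine("BeginRequest: " + Request.RawUrl);
            }
        }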
    What does all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests.

    Also, look really carefully at whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new-project templates in Visual Studio. Part of the reason this is the default is that it was required on the initial versions of IIS 7 and ASP.NET 2.0 in order to handle MVC's extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0, you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream, or whether static requests are re-routed through the ASP.NET pipeline once a managed module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

    Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for the ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is no longer required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests

    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily, ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
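    To make that concrete, here is a minimal sketch of such a module (the module name, the extension filter and the injected comment are illustrative, not taken from the article's actual samples):

        using System;
        using System.Web;

        // Example module that only acts on static .htm/.html requests and
        // exits as early as possible for everything else.
        public class StaticHtmlMessageModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Post-handler event, so the static file handler has already run
                app.PostRequestHandlerExecute += OnPostRequestHandlerExecute;
            }

            void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                string path = app.Request.FilePath;

                // Filter first and exit fast: with runAllManagedModulesForAllRequests="true"
                // this event fires for every request IIS serves, not just ASP.NET ones.
                if (!path.EndsWith(".htm", StringComparison.OrdinalIgnoreCase) &&
                    !path.EndsWith(".html", StringComparison.OrdinalIgnoreCase))
                    return;

                app.Response.Write("<!-- injected by a managed module -->");
            }

            public void Dispose() { }
        }

    Registered without preCondition="managedHandler", a module like this sees static requests too - which is exactly the behavior described above.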
    As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in IIS7, ASP.NET

  • rsync'd a folder, folder doesn't show up, but free disk space decreased

    - by Patrick
    I am currently trying to switch from Mac to a Windows/Ubuntu dual boot (on 2 separate internal HDDs), but ran into some trouble restoring my documents. I am not sure all the information below is necessary, but if I knew how to solve it, I wouldn't be asking here. Before buying this laptop, I backed up my Mac to an external HDD with Carbon Copy Cloner. I wanted to put these files in my user folder on my Windows HDD, but I could not do that from inside Windows (the Mac drive is HFS+ formatted), so I used rsync from inside Ubuntu to copy the documents from the external HDD to the Windows partition. It seemed to go okay, but from inside Windows (and later also Ubuntu) the folder doesn't show up. My free HDD space, however, has decreased by about 200 GB (the size of the backup) when looking at the disk properties (from inside Windows and Ubuntu).

    The rsync command I used:

        rsync -av /media/patrick/Toshiba\ 1.5T/Users/patrickvandenberg/ /media/patrick/Windows8_OS/Users/Patrick/MacBackup/

    The folder does not exist:

        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ cd /media/patrick/Windows8_OS/Users/Patrick/MacBackup
        bash: cd: /media/patrick/Windows8_OS/Users/Patrick/MacBackup: No such file or directory

    Size of the disk:

        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ du -hs /media/patrick/Windows8_OS/
        195G    /media/patrick/Windows8_OS/

    Size of the disk according to Disk properties: http://i.stack.imgur.com/OteMX.png (not enough rep to insert the image)

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, then used Git for Windows to commit my changes and sync them to the WAWS git repository, at which point WAWS triggers a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and can be quite large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6 MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1 MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to have Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages when deploying.

    Let's Start with a Demo

    A demo is most straightforward. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now, we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

    First we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer, since the file name consists only of an extension, so we do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore" (if the command window asks for input, just press Enter).

        echo > .gitignore

    Now open this file, copy in the content below and save.

        node_modules

    If we switch back to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push now, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it and save.

        {
          "name": "npmdemo",
          "version": "1.0.0",
          "dependencies": {
            "express": "*"
          }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. If we then open the WAWS in the developer portal, we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment, we can see how WAWS handled it - in particular, we can see that WAWS executed NPM. And if we open the log we can review which commands WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I didn't need to upload the whole package payload to Azure. Open the website and we can see the result, which proves that "express" was installed successfully.
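    As an aside, the "package.json" used here is the bare minimum. A slightly fuller sketch, with the optional metadata fields discussed in the next section, might look like this (the description and author values are placeholders, not from the original demo):

        {
          "name": "npmdemo",
          "version": "1.0.0",
          "description": "Demo of NPM package restore on Windows Azure Web Site",
          "author": "Your Name",
          "dependencies": {
            "express": "*"
          }
        }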
    What Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository: all files and folders listed in it are skipped by git push. In the example above I put "node_modules" into this file in my local repository, which means: do not track and upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. A ".gitignore" can list files and folders to ignore, and it can also name files and folders that we do NOT want to ignore. In the next section we will see how to use this un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in it, in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages and executes the NPM command to install them one by one. So in the demo above I put "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partly automatically. If we have a valid "package.json" in our local repository, then when we install a package we can specify the "--save" parameter on the "npm install" command, and NPM will update the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below - note the "--save" at the end.

        npm install azure --save

    Once it finishes, my "package.json" is updated automatically, with each dependent package listed there. The JSON key is the package name, while the value is the version range. Below is a brief list of the version range formats; for more information about "package.json" please refer here.

        Format      Description                                                          Example
        version     Must match the version exactly.                                      "azure": "0.6.7"
        >=version   Must be equal to or greater than the version.                        "azure": ">=0.6.0"
        1.2.x       Must start with the supplied digits; any digit may replace the x.    "azure": "0.6.x"
        ~version    At least as high as the range, and less than the next major
                    revision above it.                                                   "azure": "~0.6.7"
        *           Matches any version.                                                 "azure": "*"

    WAWS will install the proper version of each package based on what is defined here; this is how WAWS git deployment and NPM installation work together.

    But Some Packages...

    As we know, when we specify dependencies in "package.json", WAWS downloads and installs them on the cloud. For most packages this works very well, but some special packages may not: if the package installation has special environment requirements, it may fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail, since there is no "node-gyp", Python or C++ 2010 on the WAWS virtual machine.
    For example, here is the "server.js" file:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    And the "package.json" file:

        {
          "name": "npmdemo",
          "version": "1.0.0",
          "dependencies": {
            "express": "*",
            "msnodesql": "*"
          }
        }

    This failed to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in the ".gitignore" file except "msnodesql", and upload that package ourselves. This can be done with the content below: we first un-ignore the "node_modules" folder itself, then ignore all of its sub-folders (while letting git check each of them), and finally un-ignore the one sub-folder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the ".gitignore" syntax please refer to this thread. Now in Git for Windows we find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS, and now we can see the deployment complete successfully. We can then use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them during deployment. For special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload instead. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker to develop and deploy our Node.js applications to the cloud.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx

    - To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click on that "New" button on top, whatever SQL I generate from a right-click just overwrites the current SQL in the window. I'm probably missing a "generate new SQL in new window" setting.
    - The default Teradata SQL Assistant doesn't execute just the SQL I highlighted. There is a setting I have to change first.
    - I'm not really happy that the SQL Assistant and SQL Administrator are different apps.
    - Still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows.
    - LOVE the execution plan / explanation. I think that part is better done than MS SQL in some ways.
    - The error messages could be better.
    - I feel that the Teradata .NET provider sends a smaller query command over than others do; I don't have any hard data to support my claim.
    - One of my queries in SSRS was passing multi-valued parameters to another query and got the error "Teradata 3577 row size or sort key size overflow". Searching on this error says the solution is to cast the result column to a smaller data type, but I found that the real problem was that the parameter passed into the WHERE clause could not be too large.
    - I wish Teradata SQL Assistant would remember the window sizes I just adjusted. Every time I execute a query, the result set, query, and exec log panes re-adjust back to their default sizes. In SSMS, if I adjust the result set area to be smaller, it stays like that when I execute a query in the same window.

  • DON'T MISS THE ORACLE LINUX GENERAL SESSION @ORACLE OPENWORLD

    - by Zeynep Koch
    We have had great sessions today at OpenWorld, but tomorrow will be even better. Sessions you should not miss on Tuesday, Oct 2nd:

    General Session: Oracle Linux Strategy and Roadmap, 10:15am, Moscone South #103. Wim Coekaerts, Sr. VP, Oracle Linux and Virtualization Engineering, will talk about the Oracle Linux strategy and what is coming in the next 12 months. This is one session you should not miss, and people are already registering. Stop by to hear Wim and ask questions about Linux development.

    Top Technical Tips for Automatic and Secure Oracle Linux Deployments, 11:45am, Moscone South #270. In this session you will hear about deployment best practices and tips from Lenz Grimmer of Oracle, and two Linux customers - Martin Breslin from SEI and Ed Bailey from TransUnion - will talk about their experiences and insights.

    Why Switch to Oracle Linux?, 3:30pm, Moscone South #270. In this session you will learn why Oracle Linux is best for your enterprise. There will be an Oracle speaker, and Mike Radomski from SUNY will talk about why they chose Oracle Linux.

    Please also visit the Oracle Linux Pavilion. If you stop by at one of our partners' booths, you can enter the drawing for a beautiful plush penguin. See you all tomorrow.

  • Handling multiple Scenes in AndEngine

    - by Asad
    I am developing a game in AndEngine GLES2. I have a splash scene, loading scene, menu scene and Level1 scene, and I use a screen manager to manage all the scenes, which lets me easily switch between the splash, loading and menu scenes. The Level1 scene also loads from the menu perfectly, but a problem occurs when I go back to the menu scene on completion of the level: the screen turns black and nothing is shown after that. I think the problem is with unloading the resources of Level1, because switching between the other scenes works perfectly. I can't give the complete code, as it is too lengthy. I am using BitmapTextureRegions, Sprites, bodies, a PhysicsWorld, a HUD, fixtures, etc. One more thing: when the menu scene loads at the end of Level1 the screen turns black, but the music plays and all the logs I set in the menu scene show up in logcat. Here is my unload method:

        unload() {
            setChildrenIgnorUpdate();
            clearChildScene();
            clearEntityModifier();
            clearTouchAreas();
            clearUpdateHandler();
            BitmapTextureManager.getInstance().destroyInstance();
            destroyPhysics();
        }

    Please, any help...

  • Why are some seasoned ASP.NET developers defecting to Ruby on Rails?

    - by Tony_Henrich
    Once in a while I hear some well-known ASP.NET developer declare that they have quit developing in .NET and moved to Ruby on Rails. The problem is they don't mention exactly why. They use words like RoR is 'easier', 'better' and 'faster'. That really doesn't say much to me. Does anyone care to do a faithful comparison using code samples, case studies, etc., or from personal experience using both? Try to convince me to throw away all my years of learning C# and the .NET Framework with a powerful IDE (Visual Studio). Does RoR save you hours a week in development time? What are the major pain points in .NET that compel one to move away from it? This question is NOT about a pure RoR vs ASP.NET (MVC) comparison. It's about the compelling technical reasons (getting bored does not count!) to switch over after using a platform for several years and start with a new language and platform. (I'd prefer this to be a wiki.)

  • How to fix no splash screen in Ubuntu after nvidia proprietary driver installation (also black borders)

    - by Fabio Trevisiol
    This is a solution for the missing splash screen in Ubuntu after installing the proprietary NVIDIA driver. It shouldn't matter which Ubuntu version you use; it should work anyway. (TESTED ON 14.04)

    Open your terminal and type:

        sudo apt-get install v86d

    (TEST WITHOUT) Then:

        sudo gedit /etc/default/grub

    Find this line:

        #GRUB_GFXMODE=640x480

    Add below it (of course, choose your own resolution):

        GRUB_GFXMODE=1024x768x32
        GRUB_GFXPAYLOAD_LINUX=1920x1080x32

    (TRY WITHOUT, OR WITH A DIFFERENT BIT DEPTH; FOR GRUB_GFXPAYLOAD_LINUX YOU CAN ALSO USE THE keep OPTION) Save the file and type in the terminal:

        echo FRAMEBUFFER=y | sudo tee /etc/initramfs-tools/conf.d/splash

    (THIS AVOIDS THE SPLASH SCREEN ONLY BEING DISPLAYED FOR A FEW SECONDS)

        sudo update-initramfs -u
        sudo update-grub2

    For all those who complain about black borders in Plymouth: try making these changes before installing the NVIDIA driver, or switch back from nvidia to nouveau and from nouveau to nvidia. A kernel update from the Software Updater? It happened to me; I don't know if it matters. I don't know for which of these reasons, but after a few reboots the black borders were gone.

    UPDATE: discovered the secret. During all these beautiful things, something strange happened:

        glxinfo | grep vendor
        server glx vendor string: SGI
        client glx vendor string: Mesa Project and SGI
        OpenGL vendor string: nouveau

  • Suspended Sentence is a Free Cross-Platform Point and Click Game

    - by Asian Angel
    Do you want a fun point and click game to play on your favorite operating system? Then get ready to play Suspended Sentence! In the game you are woken from cryogenic sleep to assist in repairing the ship you are traveling on. Can you successfully complete the repairs and get your prison sentence suspended in return? Note: Suspended Sentence is available for Linux, Windows, and Mac. Suspended Sentence Homepage [via OMG! Ubuntu!] Access the Walkthrough for Suspended Sentence

  • Bridging The Gap Between Developers And Testers With VS 2010

    - by Vincent Grondin
    On January 29th, Etienne Tremblay and I presented in front of roughly 120 people in Ottawa a 7-hour "sketch" on how VS 2010 and TFS 2010 can help both devs and testers in their respective work. The presentation focused on how a tester's work can positively influence a developer's work and vice versa. The format was quite unusual: it's a "sketch" where Etienne and I "ignore" the audience and act as if we were at work, with the audience sort of "spying" on us. All in all, I'm quite pleased with the content we presented, and the format sure was a lot of fun to perform - I think the audience liked it too... The good news for you people reading this post is that it got RECORDED, and it's now available for download in quick 25-to-35-minute episodes on the DevTeach web site: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx

    There were 2 cameras, one filming us and one capturing the screen for our demos. We switch from one to the other in an interesting flow, and Jean-René Roy made sure he kept all our goofs and didn't edit out those funny "oops moments" where we screw up in the scenario... Mostly educational, but hilarious at times!!! I encourage you all to download and watch the 13 episodes... Follow a day at work for a tester and a developer using VS 2010 and TFS 2010 to improve their chemistry! Thanks to Jean-René Roy for all the work he put into this event, and to Microsoft and Pyxis for sponsoring it.

  • chromium-browser uses 99.99% of disk I/O

    - by lars
    My favorite browser, Chromium, is testing my patience. For some reason it sometimes uses 99.99% of disk I/O (reading 2-3 MB/s). Other processes (updatedb.mlocate, [kswapd0], clementine, compiz) show the same behavior; however, this problem always starts and ends with Chromium. To illustrate the impact on my system: when my disk starts to spin like crazy and the LED stays lit continuously, the system is so slow that it takes about two to five minutes to switch to tty6, log in and execute "killall chromiumbrowser && killall chromium". This is way faster than starting a new terminal in X; just starting a terminal seems too heavy for compiz under these circumstances. Waiting until it's over takes more than 30 minutes, if it ends at all. The exact circumstances are difficult to replicate. Several tabs have to be open, usually 8 or more, and the chance seems to increase when more complex sites like Gmail or plugins like Flash are running. Opening several new tabs at omgubuntu.co.uk has the best chance of replicating this issue. I have no idea where to start looking for a solution. Any help would be greatly appreciated.

    Ubuntu 12.10 | 2 GB RAM | 2x 1.66 GHz Intel | 32-bit | IBM ThinkPad R60e

  • How to become an expert web-developer?

    - by John Smith
    I am currently a junior PHP developer and I really LOVE it. I have loved the internet from the first time I got into it. I always loved smartly-created websites, always wondered how it all works, always admired websites with good design and rich functionality, and finally I am creating websites on my own, and it feels really great. My goals are to become an expert web developer (aiming at creating websites for small and medium businesses, not enterprise-sized systems), to have a great full-time job, to do freelance work, and to create my own startup in the future.

    General question: What do I do to become an expert, professional and in-demand web programmer?

    More concrete questions:

    1) How do I choose the languages and technologies needed? I know that every web developer must know HTML+CSS+JS+AJAX+jQuery, and I do some design as well, because I like it and I need it for freelancing. But what about backend languages? Currently I picked PHP because it's the most in demand in my area and most of the web uses it, but what will happen in the future? Say, in 3 years I am good at PHP and PHP frameworks by then, but what if some other language becomes the most popular? Do I switch to it? I know that a good programmer is not about languages and frameworks but about the ability to learn and to reach goals, but I still think that learning the frameworks for some language can take quite some time. Am I wrong?

    2) In general, what are the basic guidelines to becoming an expert web developer? What are the most important things I should focus on?

    Thank you!

  • IOS OpenGl transparency performance issue

    - by user346443
    I have built a game in Unity that uses OpenGL ES 1.1 for iOS. I have a nice constant frame rate of 30 until I place a semi-transparent texture over the top of my entire scene. I expect the drop in frames is due to the blending overhead in sorting the frame buffer. On the 4S and 3GS the frame rate stays at 30, but on the iPhone 4 it drops to 15-20 - probably due to the extra pixels of the Retina display compared to the 3GS, and the smaller CPU/GPU compared to the 4S. I would like to know if there is anything I can do to try to increase the frame rate when a transparent texture is rendered on top of the entire scene. Please note that the transparent texture overlay is a core part of the game, and I can't disable anything else in the scene to speed things up. If it's guaranteed to make a difference, I guess I can switch to OpenGL ES 2.0 and write the shaders, but I would prefer not to, as I need to target older devices. I should add that the depth buffer is disabled and I'm blending using SrcAlpha One. Any advice would be highly appreciated. Cheers.

  • Screen becomes black after pressing dash or alt-tab

    - by cegerxwin
    I did an upgrade from 11.04 to 11.10. Unity 3D becomes a black screen after pressing the Dash button or after pressing Alt-Tab to switch between open windows. I can see the panel at the top (clock, sound, ...) and the launcher panel on the left, but the rest is black. It looks like a maximized black window. The open windows are active, but I can't see them. I log out by pressing "Log Out" in the top right corner and pressing Enter (because logout is focused by default in the dialog), and so leave Unity 3D. Unity 3D worked very well with 11.04. If I press the Dash button, the Dash looks like a 16-bit or 8-bit window, and the maximize, minimize and close buttons are displayed and look inverted. I rebooted my notebook just now, logged in to Unity 3D, tested some features of Unity, and everything works well. The black thing is only a layer: I can use my desktop but can't see anything because of the layer, yet everything works. It seems that a layer appears when pressing the Dash or Alt-Tab and does not disappear when closing the Dash or choosing a running app with Alt-Tab. Below is the info relevant to video problems.

    Unity support (/usr/lib/nux/unity_support_test -p):

        OpenGL vendor string: X.Org R300 Project
        OpenGL renderer string: Gallium 0.4 on ATI RC410
        OpenGL version string: 2.1 Mesa 7.11-devel

        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: yes

    Video hardware (lspci -nn | grep VGA):

        01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RC410 [Radeon Xpress 200M] [1002:5a62]

  • Problems with both LightDM and GDM using DisplayLink USB monitor

    - by Austin
    When I use LightDM, it auto-logs in to the desktop just fine. The only problems are that Compiz doesn't work and menus don't work: I can't right-click the desktop, and I can't select program menus in the top bar (i.e. clicking "File" does nothing). When I use GDM, I only get a blank blue screen and the mouse cursor. I can't Ctrl+Alt+Backspace to restart, but I can Ctrl+Alt+F1 and Ctrl+Alt+F7 to switch modes. I don't think it's auto-logging me in, but I'm not sure; it plays the login-screen sound. Will update with more information when I get home!

    EDIT: Okay, so I did a fresh install, just to ensure I hadn't borked something playing in the console. I reconfigured my setup as I did before, with the same results. Here's what I followed. The only difference is that instead of setting "vga=normal nomodeset" I set "GRUB_GFXPAYLOAD_LINUX = text". Also, I only have the DisplayLink monitor configured in my xorg.conf file. At this point I'm using the open radeon driver, although I used the proprietary ati driver before. I'm not sure if I'm having a problem with:

    - X configuration
    - the graphics driver
    - the DisplayLink driver
    - Unity
    - LightDM
    - Compiz
    - or something else

    The resolution of the monitor is 800x480, 16-bit. I tried setting a larger virtual resolution of 1200x720 (because the real resolution is lower than the recommended resolution), but it causes Ubuntu to boot into low-graphics mode. When I get home I'm going to install the fglrx driver and see if it enables virtual resolutions, which may further enable my window manager to function properly.

  • 5 Design Tricks Facebook Uses To Affect Your Privacy Decisions

    - by Jason Fitzpatrick
    If you feel like Facebook has fewer and fewer options for rejecting applications' and organizations' access to your private information, you're not imagining it. Here are five ways Facebook's design choices in the App Center have minimized your choices over time. Over at TechCrunch they have a guest post by Avi Charkham highlighting five ways recent changes to the Facebook App Center put privacy settings on the back burner. In regard to the comparison seen in the image above, for example, he writes:

        #1: The Single Button Trick
        In the old design Facebook used two buttons - "Allow" and "Don't Allow" - which automatically led you to make a decision. In the new App Center Facebook chose to use a single button. No confirmation, no decisions to make. One click and, boom, you're done! Your information was passed on to the app developers and you never even notice it.

    Hit up the link below to check out the other four redesign choices that minimize the information about privacy and data usage you see and maximize the click-through and acceptance rate for apps.

  • Web developing- Strange happenings

    - by Jason
    As I'm teaching myself PHP and MySQL during break, I'm experimenting with coding in an Ubuntu virtual machine where Apache, MySQL and PHP have been installed and configured against a shared folder. I'm not a big fan of Kompozer because the source-code layout is a PIA, so I've started checking out gPHPEdit. However, since using it, I've come across two issues:

    1. When I edit .html and .php files, sometimes the file extension changes to .html~ or .php~, becoming invisible to the browser. The only solution is to switch to Windows, right-click and rename the file extension.

    2. In Ubuntu Firefox, when I click my project's Submit button in a practice form, a dialog box pops up asking what Firefox should do with the .php file, rather than simply displaying it in the browser. When I do this in Windows Chrome and Firefox, it goes right to the response page.

    I'm not sure if this behavior is limited to gPHPEdit/Kompozer, but I've never noticed it happening in Dreamweaver. Any solutions?

    EDIT: The behavior in point 1 occurs both when Dreamweaver is open in Windows accessing the same files and when it is not. I changed the extension of the welcome.php file back, added a comment in gPHPEdit, and the file changed to welcome.php~ again upon saving.

  • Jump and run HTML5 Game Framework

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for it. We have some difficulties here and would like to ask for advice:

    We have a "Stage" object, which represents the root of our game and is a global div wrapper. The Stage can contain multiple "Scenes", which are also div elements. We would implement a scene for playing, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each ObjectEntity has its own temporary canvas, so that one entity can draw an image while another contains a rectangle. We set an activeScene in our Stage, so when the game is played only the active scene is drawn. Calling activeScene.draw() tells all sublayers to draw, which in turn draw their entities (calling drawImage(entity.canvas)).

    But is this good practice - having multiple canvases to draw? In each game loop, every layer context is cleared and redrawn. E.g. we have a static background layer... wouldn't it be more efficient to draw it once and not clear and redraw it every time? Or should we use one global canvas, for example in the Stage, and just use that canvas to draw? We thought this would be too expensive...

    Another question: Do you have any advice on how to dive into implementing our own framework? Most material we find online relies on existing frameworks, or the authors just implement their game without building a framework.

  • Magento - How to manage multiple base currencies and multiple payment gateways?

    - by Diego
    I have two requirements to satisfy, and I hope someone with more experience can help me sort them out.

    Multiple base currencies. My client wants to allow visitors to place orders in whatever currency they prefer, chosen from the ones he'll configure. Magento only supports one base currency, and this is obviously not what I need. I checked the solution involving multiple websites, but I need a customer to register once and stay on the same website, not switch from one to another and have to register/log in on each.

    Multiple payment gateways per currency and per payment method. This is another crucial requirement, and it's tied to the first one. My client wants to "route" payments in different currencies to different accounts. He'll have one for EUR, one for USD and one for GBP. Whenever a customer pays in one of these currencies, the payment gateway has to be chosen accordingly. Additionally, the gateway should differ depending on other rules. For example, if a customer pays with a debit card, my client will have a payment gateway configured especially for it; if the customer pays with MasterCard, the gateway will be different, and so on. The complication in this case arises from the fact that my client uses Realex Payments, and although it would be possible for him to open multiple accounts, the Realex module expects one single gateway. In a normal scenario, we would need up to six instead:

    - Payment with debit card in EUR
    - Payment with credit card in EUR
    - Payment with debit card in USD
    - Payment with credit card in USD
    - Payment with debit card in GBP
    - Payment with credit card in GBP

    This, of course, if he doesn't decide to accept other payment methods, such as bank transfer, which would add one more gateway per currency. Is there a way to achieve the above in Magento? I have never had such complicated requirements before, and I'm a bit lost. Thanks in advance for the help.

  • wubi dual-boot installation of ubuntu 12.04 on Windows 7 fails to boot

    - by Andrew
    I am trying to use the Wubi installation process to create an Ubuntu 12.04 / Windows 7 dual-boot setup on my Windows 7 machine (Dell Inspiron 17R). The installation initially works fine, and I am able to load Ubuntu several times after selecting it from the boot menu. However, when I boot into Windows 7 it seems to corrupt the Ubuntu boot process, because after running Windows 7, Ubuntu won't boot on the machine. It is still listed as an option in the boot menu, but when it is selected, the machine does one of the following:

    - hangs at the load screen, saying that Ubuntu is preparing to run for the first time (although it isn't the first time the OS has been loaded)
    - hangs with a black screen and does nothing

    I have uninstalled Ubuntu and then reinstalled it (using Wubi) three times. Each time, Ubuntu initially boots okay (including rebooting the laptop into Ubuntu several times). However, whenever I switch over and boot into Windows 7, it breaks the Ubuntu installation. Windows 7 continues to boot and work fine without issues. I have successfully installed Ubuntu using Wubi onto a different Windows 7 machine before without problems... it seems that there is something different about this laptop's configuration. I am not sure how to debug the issue: I see no error messages during the Ubuntu boot process when it hangs, and am not sure how to debug this.

  • How do I get my Lexmark x4650 printer working?

    - by Fallen Dohingy
    I think that my printer stopped working with the switch to GNOME 3 / Unity. Yes, I have tried 32- and 64-bit OSes. Here is the driver. In order to actually install the driver, you need to extract it, then open a terminal, type sudo followed by a space, and drag the script into the terminal window. Here is what it said in the driver install window:

        Extracting file: printdriver.te
        Extracting file: lexmark-08z-series-driver-1.0-1.i386.deb
        Extracting file: launcher.c
        Extracting file: launcherfallendohingy@Ubuntu-Inspiron-15R:~$ sudo '/home/fallendohingy/Downloads/lexmark-08z-series-driver-1.0-1.i386.deb.sh'
        [sudo] password for fallendohingy:
        Verifying archive integrity... All good.
        Uncompressing nixstaller..............................................................
        Collecting info for this system...
        Operating system: linux
        CPU Arch: x86_64
        Warning: No installer for "x86_64" found, defaulting to x86...
        TRACKING IDENT = 170209
        cpu speed = 2394 MHz
        ram size = 3762.69921875 MB
        hd avail = 74348 MB
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        /usr/lib/gio/modules/libgvfsdbus.so: wrong ELF class: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so
        Extracting file: lsbrowser
        Extracting file: lsusbdevice
        Using dpkg installation
        =============================
        Execute: dpkg -i --force-architecture lexmark-08z-series-driver-1.0-1.i386.deb > /tmp/selfgz17540/pkg/files/dpkg_msgs
        =============================
        =============================
        Execute: rm lexmark-08z-series-driver-1.0-1.i386.deb
        =============================
        =============================
        Execute: /sbin/udevadm control --reload-rules
        =============================
        Successfully installed the .deb Lexmark drivers.

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on its DSL connections. I use Network Manager to connect to this with a bridged modem connected to eth0. I often want to test networking things, so I set myself up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges. (I work for my ISP.) Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the internet, while still being able to run the PPPoE for my own computer at the same time. That means I need to get Network Manager to run PPPoE over the bridge (or eth0). The problem is that it considers eth0 (and the bridge) 'not managed' by Network Manager, so it refuses to use them. So, how can I have Network Manager dial PPPoE over a bridge?

  • Level editor event system, how to translate event to game action

    - by Martino Wullems
    Hello, I've been busy trying to create a level editor for a tile-based game I'm working on. It's all going pretty well; the only thing I'm having trouble with is creating a simple event system. Let's say the player steps on a particular tile that had the action "teleport" assigned to it in the editor. The teleport string is saved in the tile object as a variable. When creating the tile grid, an ActionManager class scans the action variable and assigns actions to it:

        public static class ActionManager {
            public static function ParseTileAction(tile:Tile) {
                switch(tile.action) {
                    case "TELEPORT":
                        // assign action here
                        break;
                }
            }
        }

    Now this is a collision event, so I guess I should also provide an object to collide with the tile. But what if it had to count for collisions with all objects in the world? Also, checking for collisions in the ActionManager class doesn't seem very efficient. Am I even on the right track here? I'm new to game design, so I could be completely off track. Any tips on how handling and creating events with an editor is usually done would be great. Thanks in advance.

  • Do we really need a thousand Linux distributions?

    - by nebukadnezzar
    Pointed from an answer to a (possibly related) question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already-popular distributions with minimal changes, usually limited to themes, wallpapers and buttons - the kind of stuff most people probably wouldn't see as a reason to fork a Linux distribution. Of course, someone will always say "open source is also about the freedom of choice", and while I wholeheartedly agree, I do not believe that this is a valid reason to fork an already perfectly working distribution into a new one, which might well result in less security/stability due to the smaller group of developers. There's another problem: those who want to switch to Linux are confronted with a never-ending list of Linux distributions and rightfully wonder which they're supposed to choose (in fact, I was facing that problem before I discovered Ubuntu). There might be a few valid reasons to fork a distribution:

    - Specializing in a particular topic (FOSS-only, a work-related topic (i.e. for a hospital), etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such

    But even with these points in mind, it would still seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. So, why exactly do we need so many distributions?

  • Tabs Visual Manager Adds Thumbnailed Tab Switching to Chrome

    - by ETC
    If you rock a bunch of tabs and sometimes need a little visual reminder to recall where you left a tab you're looking for, Tabs Visual Manager thumbnails all your tabs for easy visual switching. Install Tabs Visual Manager, restart Chrome, and anytime you need to find a tab you can click on the Tabs Visual Manager icon in the toolbar. By default it opens a new tab with all your tab thumbnails; we found it was more convenient to switch it to pop-up mode (wherein it pops up a smaller menu from the icon itself instead of a whole new tab). Tabs Visual Manager is a free extension and works wherever Chrome does. Hit up the link below to read more and grab a copy. Tabs Visual Manager [Google Chrome Extensions]
