Search Results

Search found 16435 results on 658 pages for 'portable applications'.


  • Windows Phone 7 developer resources

    - by Daniel Moth
    Developers of Windows Mobile 6.x (and indeed Windows CE) applications still use the rich .NET Compact Framework 3.5 with Visual Studio 2008 for development. That is still a great platform, and the Mobile Development Handbook is still a useful resource (if I may say so myself :-). The release of Windows Phone 7 changes the programming paradigm. The programming model has NETCF in its guts, but the developer uses the Silverlight or XNA APIs (and they can call from one into the other). I thought I'd gather here (for your reference and mine) the top 10 resources for getting started. Windows Phone Developer Home - get the official word and latest announcements. Windows Phone Developer Tools RTW - download the free developer tools (on my machine the installation took 30 minutes, over my existing vanilla Visual Studio 2010 install). Windows Phone 7 Jump Start video training - watch the 12 sessions by Wigley/Miles. Windows Phone 7 Developer Training Kit - work through the labs. Windows Phone RSS tag - Channel 9 has tons more WP7 videos, stay tuned. Windows Phone 7 in 7 Minutes - watch 20 seven-minute videos. Programming Windows Phone 7 - read 11 free chapters from Petzold's eBook. The Windows Phone Developer Blog - subscribe to the official blog. Getting Started with Windows Phone Development - explore all links from the MSDN Library root page. Silverlight for Windows Phone – another root MSDN Library page. If after all that you get your hands dirty and still can't find the answer, ask questions at the WP7 development MSDN Forum. On a personal note, I was pleased to see that the Parallel Stacks debugger window works fine with the WP7 project ;-). Comments about this post are welcome at the original blog.

    Read the article

  • Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it)

    - by ScottGu
    Last week we published the RC2 build of ASP.NET MVC 3. I blogged a bunch of details about it here. One of the reasons we publish release candidates is to help find those last “hard to find” bugs. So far we haven’t seen many issues reported with the RC2 release (which is good) - although we have seen a few reports of a metadata caching bug that manifests itself in at least two scenarios: Nullable parameters in action methods have problems: When you have a controller action method with a nullable parameter (like int? – or a complex type that has a nullable sub-property), the nullable parameter might always end up being null - even when the request contains a valid value for the parameter. [AllowHtml] doesn’t allow HTML in model binding: When you decorate a model property with an [AllowHtml] attribute (to turn off HTML injection protection), the model binding still fails when HTML content is posted to it. Both of these issues are caused by an over-eager caching optimization we introduced very late in the RC2 milestone. This issue will be fixed for the final ASP.NET MVC 3 release. Below is a workaround step you can implement to fix it today. Workaround You Can Use Today You can fix the above issues with the current ASP.NET MVC 3 RC2 release by adding one line of code to the Application_Start() event handler within the Global.asax class of your application (a sketch of the change is shown below). That single line sets the ModelMetadataProviders.Current property to use the DataAnnotationsModelMetadataProvider. This causes ASP.NET MVC 3 to use a metadata provider implementation that doesn’t have the more aggressive caching logic we introduced late in the RC2 release, and so prevents the caching problems that cause the issues above. You don’t need to change any other code within your application. Once you make this change the above issues are fixed. You won’t need to have this line of code within your applications once the final ASP.NET MVC 3 release ships (although keeping it in also won’t cause any problems). Hope this helps – and please keep any reports of issues coming our way, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
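    A minimal sketch of what that one-line workaround looks like in Global.asax.cs. Only the ModelMetadataProviders line is the fix itself; the surrounding class and route registration are just the stock MVC 3 project template shape, included here for context:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // Workaround for the RC2 metadata caching bug described above:
                // use the DataAnnotationsModelMetadataProvider, which does not
                // include the over-eager caching added late in the RC2 milestone.
                ModelMetadataProviders.Current = new DataAnnotationsModelMetadataProvider();

                AreaRegistration.RegisterAllAreas();
                RegisterRoutes(RouteTable.Routes);
            }

            static void RegisterRoutes(RouteCollection routes)
            {
                // Default route from the MVC 3 template, shown only for context.
                routes.MapRoute("Default", "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }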

    Read the article

  • CRM@Oracle Series: Sales Pipeline Visibility

    - by tony.berk
    Can you see it? Should you see it all? Who else should see it? Can their manager see it? What is it? Today's slidecast discusses the popular topic of sales pipeline visibility within a CRM application and how Oracle implemented visibility in our global implementation of Siebel CRM. This post is next in the CRM@Oracle Series, which discusses Oracle's internal use of Oracle CRM products such as Siebel CRM and Oracle CRM On Demand. Oracle's requirements include a variety of different organizations, roles and responsibilities. Oracle's Applications IT CRM Systems team, responsible for deploying Siebel CRM within Oracle, implemented a number of creative solutions to address the requirements, and they are shared in the slidecast. CRM@Oracle - Sales Pipeline Visibility Click here to learn more about Oracle CRM products and here to learn about other customers using Oracle CRM products. We want to hear from you! If you have a particular CRM area or function which you'd like to hear how Oracle implemented it internally, let us know and we'll get it on our list. In the meantime, enjoy today's update on the CRM@Oracle series.

    Read the article

  • Text Expander broken upon Snow Leopard Upgrade

    - by bLee
    I've been using TextExpander for years. However, I found out today that it no longer works on my MacBook Pro, which I just upgraded to Snow Leopard. I know that a lot of applications are not yet compatible with Snow Leopard, and developers around the world are working to make them run again on our beautiful Macs. My questions are: 1. Is TextExpander supposed to NOT work on Snow Leopard? 2. What is a good substitute for TextExpander? Or are there any updates from the TextExpander developers?

    Read the article

  • Terminal proxy or screen without terminal emulation

    - by ZyX
    How can I make terminal applications immune to the terminal emulator closing, but still able to use all virtual terminal features? I imagine this must be something like screen, but without the VT100 terminal emulation - something which will just apply whatever the application does with the "terminal proxy"'s terminal (like outputting something to stdout/stderr or using stty to set terminal options) to the terminal the proxy runs in. I know about screen and "altscreen on", but the rendering it produces (shown in the original post's screenshots, with either TERM=screen or TERM=rxvt-unicode) is not what I get from rxvt-unicode without screen, which is what I want. I have figured out that everything looks fine if I compile rxvt-unicode with USE=-xterm-color (in fact vim renders like the screen case even without screen if I add this USE flag) and set TERM=screen-256color, but I do not like this workaround because it actually changes colors and I can't be sure that it will always change them only in this way.

    Read the article

  • Restore deleted default folders

    - by Helena T.
    I was tinkering with my new laptop and, on purpose, deleted some of the default folders in my "home" directory: "My Music", "Links", "Favorites". I did this because I decided I wanted all my data on another partition, leaving C: only for applications and config files. But now some of Explorer's functionality is gone: I cannot use the Favorites tree in the left-side pane, and I also discovered that "My Documents" stores some PowerShell config files. I feel like I misunderstood these folders' purpose and, by deleting them, caused some Explorer instability. Is there any way to restore them? I cannot seem to find one. Thank you for taking the time to read this.
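    One possible approach, sketched below: Windows keeps the locations of these special folders in the HKCU "User Shell Folders" registry key, so you can recreate the directories (on whichever partition you like) and point the corresponding registry values at them, then log off and back on. This is only a sketch - the target paths and the choice of folders are assumptions; adjust them to your setup and back up the registry key first.

        using System.Collections.Generic;
        using System.IO;
        using Microsoft.Win32;

        class RestoreDefaultFolders
        {
            static void Main()
            {
                // Hypothetical target paths on another partition - change to taste.
                var folders = new Dictionary<string, string>
                {
                    { "Favorites", @"D:\Helena\Favorites" },
                    { "My Music",  @"D:\Helena\Music" },
                };

                // Explorer reads the special-folder locations from this key.
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                    @"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders", true))
                {
                    foreach (var pair in folders)
                    {
                        Directory.CreateDirectory(pair.Value);             // recreate the folder
                        key.SetValue(pair.Key, pair.Value,
                                     RegistryValueKind.ExpandString);      // re-point Explorer at it
                    }
                }
                // Log off and back on (or restart explorer.exe) for the change to take effect.
            }
        }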

    Read the article

  • PHP FastCGI SAPI: Reloading PHP Configuration

    - by Emre
    I am using the PHP FastCGI SAPI in my web hosting environment to run PHP applications. To spawn FCGI processes I use the spawn-fcgi helper program. My problem is that whenever I make a change to the php.ini file, I have to kill and respawn each FastCGI server for the new configuration to take effect. Is there a way to reload the PHP configuration (i.e. php.ini directives) without respawning each FastCGI server? I tried sending the hangup signal (i.e. kill -HUP PHPCGIPID) to the servers, but this results in the servers terminating.

    Read the article

  • How can I open urls on my host machine with VMWARE?

    - by Yanamon
    I am running a Windows 7 VM with VMware Player on Fedora. I have VMware Tools installed and have successfully used some of its features, like Unity mode, so it seems to be installed correctly. That being said, I still cannot get URLs to open in my host machine's browsers. These are the steps I have taken: within the VM I set "Default Host Application" as the application to open URLs; on my host machine I set Chrome as my preferred application for opening URLs; and I enabled Shared Folders in the VM (not sure if that really helped anything, but I saw it suggested in a forum post). After doing that, when I click on a link I get the following error message: "Default host Application: Make sure the virtual machine's configuration allows the guest to open host applications." I cannot find any option like that in my VM's configuration, so I am not sure what the error message is referring to.

    Read the article

  • Memory Pressure Protection Feature for TCP Stack - Provided by Microsoft Security Update KB967723

    - by Angry_IT_Guru
    We've been having a lot of funky issues with some of our web-based applications that allow clients to submit a lot of image files to our servers. Lots of ports are used in the process. http://www.microsoft.com/technet/security/bulletin/MS09-048.mspx - released in Sept 2009. support.microsoft.com/kb/974288 - Memory Pressure Protection description. Evidently, after applying KB967723, our clients receive funky error messages as if connections cannot be made to the server or connections have been closed. There doesn't appear to be a pattern; sometimes it works and other times it doesn't. Typically we've noticed it when the server is under load. I'm curious what others think about this MPP and any issues that you may have experienced with it. I understand its purpose, but I think it may have broken a lot of apps in the process. It doesn't look like Microsoft made this "feature" widely known.

    Read the article

  • Integrating Twitter Into An ASP.NET Website

    Twitter is a popular social networking web service for writing and sharing short messages. These tidy text messages are referred to as tweets and are limited to 140 characters. Users can leave tweets and follow other users directly from Twitter's website or by using the Twitter API. Twitter's API makes it possible to integrate Twitter with external applications. For example, you can use the Twitter API to display your latest tweets on your blog. A mom and pop online store could integrate Twitter such that a new tweet was added each time a customer completed an order. And ELMAH, a popular open-source error logging library, can be configured to send error notifications to Twitter. Twitter's API is implemented over HTTP using the design principles of Representational State Transfer (REST). In a nutshell, inter-operating with the Twitter API involves a client - your application - sending an XML-formatted message over HTTP to the server - Twitter's website. The server responds with an XML-formatted message that contains status information and data. While you can certainly interface with this API by writing your own code to communicate with the Twitter API over HTTP along with the code that creates and parses the XML payloads exchanged between the client and server, such work is unnecessary since there are many community-created Twitter API libraries for a variety of programming frameworks. This article shows how to integrate Twitter with an ASP.NET website using the Twitterizer library, which is a free, open-source .NET library for working with the Twitter API. Specifically, this article shows how to retrieve your latest tweets and how to post a tweet using Twitterizer. Read on to learn more! Read More >
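    To make the request/response pattern described above concrete, here is a rough sketch of fetching a user timeline by hand over HTTP and parsing the XML reply. This is purely illustrative: the v1 XML endpoint shown is the one Twitter exposed around the time the article was written and has since been retired, and in practice a library such as Twitterizer wraps this plumbing (plus authentication) for you.

        using System;
        using System.Net;
        using System.Xml.Linq;

        class TwitterTimelineSketch
        {
            static void Main()
            {
                // Historical v1 endpoint, shown only to illustrate the REST-over-HTTP
                // exchange the article describes; it no longer works today.
                const string url =
                    "http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=example";

                using (var client = new WebClient())
                {
                    // The client sends a plain HTTP GET; the server answers with XML.
                    string xml = client.DownloadString(url);
                    XDocument doc = XDocument.Parse(xml);

                    // Each <status> element carries one tweet; print its timestamp and text.
                    foreach (XElement status in doc.Descendants("status"))
                        Console.WriteLine("{0}: {1}",
                            (string)status.Element("created_at"),
                            (string)status.Element("text"));
                }
            }
        }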

    Read the article

  • Oracle UPK Content Development Tool Settings

    - by [email protected]
    Oracle UPK Content Development tool settings: Before developing UPK content, your UPK Developer needs to be configured with certain standard settings to ensure the content will have a uniform look. To set the options: 1. Open the UPK Developer. 2. Click the Tools menu. 3. Click Options. After you configure the UPK Options, you can share these preferences with other content developers by exporting them to an .ops file. This is particularly useful in workgroup environments where multiple authors are working on the same content that requires consistent output regardless of who authored the content. (To learn more about Exporting/Importing Content Defaults refer to the Content Development.pdf guide that is delivered with the UPK Developer.) Here is a list of a few UPK Developer tool settings that Oracle UPK Content Developers use to develop UPK pre-built content: Screen resolution is set to 1024 x 768. See It mode frame delay is set to 5 seconds. Know It Required % is set to 70% and all three levels of remediation are selected. We opt to automatically record keyboard shortcuts. We use the default settings for the Bubble icon and Pointer position. Bubble color is yellow (Red = 255, Green = 255, Blue = 128). Bubble text is Verdana, Regular, 9 pt. ***Intro and end frame settings match the bubble settings Note: The Content Defaults String Input Settings will change based on which application (interface) you are recording against. For example here is a list of settings for different Oracle applications: • Agile - Microsoft Sans Serif, Regular, 8 • EBS - Microsoft Sans Serif, Regular, 10 • Hyperion - Microsoft Sans Serif, Regular, 8 • JDE E1 - Arial, Regular, 10 • PeopleSoft - Arial, Regular, 9 • Siebel - Arial, Regular, 8 Remember, it is recommended that you set the content defaults before you add documents and record content. When the content defaults are changed, existing documents are not affected and continue to use the defaults that were in effect when those documents were created. - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • Oracle's Primavera P6 Analytics Now Available!

    - by mark.kromer
    Oracle's Primavera product team announced this week the general availability of our first Oracle BI (OBI) based analytical product, with pre-built business intelligence dashboards, reports and KPIs built in. P6 Analytics uses OBI's drill-down capabilities, summarizations, hierarchies and other BI features to give your business users the knowledge to make the best decisions on portfolios, projects, schedules & resources, with deep insights. Without needing to launch the P6 tool, your executives, PMO, project sponsors, etc. can view up-to-date project performance information as well as historic trends of project performance. Using web-based portal technology, P6 Analytics makes it easy to manage by exception and then drill down to quickly perform root cause analysis of problem projects. At the same time, a brand new version of the P6 Reporting Database R2 was just announced and is also now available. This updated reporting database provides you with 4 star schemas with spread data and includes P6 activity, project and resource codes. You can use the data warehouse and ETL functions of the P6 Reporting Database R2 with your own reporting tools, or build dashboards that utilize the hierarchies & drill down to the day level on scheduled activities using Business Objects, Cognos, Microsoft, etc. Both of these products can be downloaded from E-Delivery under the Primavera applications section in the P6 EPPM v7.0 media pack. I put some examples in the original post of the resource utilization, earned value, landing page and portfolio analysis dashboards that come out of the box with P6 Analytics, to give you these deep insights into your projects & portfolios on day 1 of using the tool. Please send an email to Karl or me if you have any questions or would like more information. Oracle Technology Network and the Oracle.com marketing sites are currently being refreshed with further details of these exciting new releases of the Primavera BI and data warehouse products. Lastly, the original post includes some screenshots of the new P6 Analytics R1 product using OBIEE! Thanks, Mark Kromer

    Read the article

  • Event 4098, 0x80070533 Logon failure: account currently disabled?

    - by Windos
    Having started to upgrade our PCs to Windows 7, we have noticed that we are getting group policy warnings in Event Viewer such as: "The user 'Word.qat' preference item in the 'a_Office2007_Users {A084A37B-6D4C-41C0-8AF7-B891B87FC53B}' Group Policy object did not apply because it failed with error code '0x80070533 Logon failure: account currently disabled.' This error was suppressed." Fifteen of these warnings appear every two hours on every Windows 7 PC; most of them relate to core Office applications and two are for plug-ins to our document management system. These warnings aren't affecting the users, but it would be nice to track down their source before we roll out Win7 to the rest of the organisation. Any ideas as to where the logon issue could be coming from (all users are connecting to the domain, proxy, etc. fine)?

    Read the article

  • Pro BizTalk 2009

    - by Sean Feldman
    I have finished reading the Pro BizTalk 2009 book from Apress. This is a great book if you've never dealt with BizTalk in the past and want a quick "on-ramp". Although the book is mainly concerned with the right way of building traditional BizTalk applications, it also dedicates a chapter to the ESB Toolkit and does a good job of analyzing it. The fact that the authors cared about subjects such as coupling, hard-coding, automation, etc. makes it very interesting. One warning: if you are expecting a book that will guide you through step-by-step examples, forget it. This book doesn't do that, and it isn't really possible given the many errata found in it. I really was disappointed by the number of typos, inaccuracies, and technical mistakes. These kinds of things turn readers away, especially when they have had no experience with BizTalk in the past. But this is the only bad thing about it. As for the rest – great content. Next week I am taking a deep-dive course on BizTalk 2009. I really feel that this book has helped me a lot to get my feet wet. My next target will be the ESB Toolkit. To get to that, I will follow what the book's authors wrote: "you have to understand how BizTalk as an engine works".

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and product named worktile. He asked me to figure out a synchronization solution which helps his product in the future. And he preferred me implementing the service in Node.js, since his worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t think I’m an expert of JavaScript. In fact I’m very new to it. So it scared me a bit when he asked me to use Node.js. But after about one week investigate I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I became love Node.js. Hence I decided to have a series named “Node.js Adventure”, where I will demonstrate my story of learning and using Node.js in Windows and Windows Azure. And this is the first one.   (Brief) Introduction of Node.js I don’t want to have a fully detailed introduction of Node.js. There are many resource on the internet we can find. But the best one is its homepage. Node.js was created by Ryan Dahl, sponsored by Joyent. It’s consist of about 80% C/C++ for core and 20% JavaScript for API. It utilizes CommonJS as the module system which we will explain later. The official definition of Node.js is Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. First of all, Node.js utilizes JavaScript as its development language and runs on top of V8 engine, which is being used by Chrome. It brings JavaScript, a client-side language into the backend service world. So many people said, even though not that actually, “Node.js is a server side JavaScript”. Additionally, Node.js uses an event-driven, non-blocking IO model. This means in Node.js there’s no way to block currently working thread. Every operation in Node.js executed asynchronously. This is a huge benefit especially if our code needs IO operations such as reading disks, connect to database, consuming web service, etc.. Unlike IIS or Apache, Node.js doesn’t utilize the multi-thread model. In Node.js there’s only one working thread serves all users requests and resources response, as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user have an IO request, the ST serves it but it will not do the IO operation. Instead the ST will go to the POSIX async threads pool to pick up an AT, pass this operation to it, and then back to serve any other requests. The AT will actually do the IO operation asynchronously. Assuming before the AT complete the IO operation there is another user comes. The ST will serve this new user request, pick up another AT from the POSIX and then back. If the previous AT finished the IO operation it will take the result back and wait for the ST to serve. ST will take the response and return the AT to POSIX, and then response to the user. And if the second AT finished its job, the ST will response back to the second user in the same way. As you can see, in Node.js there’s only one thread serve clients’ requests and POSIX results. This thread looping between the users and POSIX and pass the data back and forth. The async jobs will be handled by POSIX. 
This is the event-driven non-blocking IO model. The performance of is model is much better than the multi-threaded blocking model. For example, Apache is built in multi-threaded blocking model while Nginx is in event-driven non-blocking mode. Below is the performance comparison between them. And below is the memory usage comparison between them. These charts are captured from the video NodeJS Basics: An Introductory Training, which presented at Cloud Foundry Developer Advocate.   Node.js on Windows To execute Node.js application on windows is very simple. First of you we need to download the latest Node.js platform from its website. After installed, it will register its folder into system path variant so that we can execute Node.js at anywhere. To confirm the Node.js installation, just open up a command windows and type “node”, then it will show the Node.js console. As you can see this is a JavaScript interactive console. We can type some simple JavaScript code and command here. To run a Node.js JavaScript application, just specify the source code file name as the argument of the “node” command. For example, let’s create a Node.js source code file named “helloworld.js”. Then copy a sample code from Node.js website. 1: var http = require("http"); 2:  3: http.createServer(function (req, res) { 4: res.writeHead(200, {"Content-Type": "text/plain"}); 5: res.end("Hello World\n"); 6: }).listen(1337, "127.0.0.1"); 7:  8: console.log("Server running at http://127.0.0.1:1337/"); This code will create a web server, listening on 1337 port and return “Hello World” when any requests come. Run it in the command windows. Then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not creating a web application. In fact we are likely creating a web server. We need to deal with request, response and the related headers, status code, etc.. And this is one of the benefit of using Node.js, lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that, Node.js utilizes CommonJS as its module system, so that we can leverage some modules to simplify our job. And furthermore, there are about ten thousand of modules available n the internet, which covers almost all areas in server side application development.   NPM and Node.js Modules Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js if we have an entry file named “index.js”, then all modules it needs will be located at the “node_modules” folder. And in the “index.js” we can import modules by specifying the module name. For example, in the code we’ve just created, we imported a module named “http”, which is a build-in module installed alone with Node.js. So that we can use the code in this “http” module. Besides the build-in modules there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules at this website. Hence this is another benefit of using Node.js. There are many modules we can use, and the numbers of modules increased very fast, and also we can publish our modules to the community. When I wrote this post, there are totally 14,608 modules at NPN and about 10 thousand downloads per day. Install a module is very simple. Let’s back to our command windows and input the command “npm install express”. This command will install a module named “express”, which is a MVC framework on top of Node.js. 
And let’s create another JavaScript file named “helloweb.js” and copy the code below in it. I imported the “express” module. And then when the user browse the home page it will response a text. If the incoming URL matches “/Echo/:value” which the “value” is what the user specified, it will pass it back with the current date time in JSON format. And finally my website was listening at 12345 port. 1: var express = require("express"); 2: var app = express(); 3:  4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7:  8: app.get("/Echo/:value", function(req, res) { 9: var value = req.params.value; 10: res.json({ 11: "Value" : value, 12: "Time" : new Date() 13: }); 14: }); 15:  16: console.log("Web application opened."); 17: app.listen(12345); For more information and API about the “express”, please have a look here. Start our application from the command window by command “node helloweb.js”, and then navigate to the home page we can see the response in the browser. And if we go to, for example http://localhost:12345/Echo/Hello Shaun, we can see the JSON result. The “express” module is very populate in NPM. It makes the job simple when we need to build a MVC website. There are many modules very useful in NPM. - underscore: A utility module covers many common functionalities such as for each, map, reduce, select, etc.. - request: A very simple HTT request client. - async: Library for coordinate async operations. - wind: Library which enable us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.   Node.js and IIS I demonstrated how to run the Node.js application from console. Since we are in Windows another common requirement would be, “can I host Node.js in IIS?” The answer is “Yes”. Tomasz Janczuk created a project IISNode at his GitHub space we can find here. And Scott Hanselman had published a blog post introduced about it.   Summary In this post I provided a very brief introduction of Node.js, includes it official definition, architecture and how it implement the event-driven non-blocking model. And then I described how to install and run a Node.js application on windows console. I also described the Node.js module system and NPM command. At the end I referred some links about IISNode, an IIS extension that allows Node.js application runs on IIS. Node.js became a very popular server side application platform especially in this year. By leveraging its non-blocking IO model and async feature it’s very useful for us to build a highly scalable, asynchronously service. I think Node.js will be used widely in the cloud application development in the near future.   In the next post I will explain how to use SQL Server from Node.js.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Run WordPress & Other Web Apps with Windows Web Platform

    - by Matthew Guay
    Would you like to run WordPress or other web apps on your PC so you can easily test and design websites?  Here we’ll look at how you can get the latest web apps on your computer in only a few quick steps. Many web apps today, such as WordPress, MediaWiki, and more, are open source and can be run for free from any computer with even a simple local web server.  They are often very difficult to install on your computer, since they require a number of dependencies such as PHP and MySQL.  Microsoft has worked to make this easier, releasing the Windows Web Platform Installer.  This lets you install many popular web apps and free tools in Windows with only a few clicks. Here we’re going to look at how to install WordPress and the free Visual Web Developer 2010 Express to edit web code with the Web Platform Installer.  But, if you’d rather install a different web app or tool, feel free to choose those as the installations are generally similar. Getting Started Head over to Microsoft’s Web development site and download the Web Platform Installer (link below).  This will download very quick, as it is just a small loader.  When you run this loader, it will download the Web Platform Installer files.  The Web Platform Installer works on XP, Vista, and Windows 7, as well as the related versions of Windows Server. After a couple moments, the Web Platform Installer will open and load information about the latest web offerings.    Now you can choose what you want to install.  You can quickly select the recommended products for several categories such as Web Server, Database, and more. Alternately, click Customize under the category and select exactly what you want to install.  Note that items already installed on your computer will be grayed out. We wanted to install Visual Web Developer 2010 Express, so select Customize under Tools, and select Visual Web Developer 2010 Express. Or, for more preset choices, select Options on the bottom of the window. You can choose to add Multimedia, Developer, and Enterprise tools to the lists, or add a new preset list from a feed. Choose Specific Web apps to Install We wanted to install WordPress, so instead of choosing a preset, select the Web Applications tab on the left.  Now you can choose from a variety of apps based on category, or you can view them all together in an A to Z, Most Popular, or Highest Rating list. Click the checkbox beside the app you want to install to select it, or click the “i” for more information. Here’s the More Information pane for WordPress.  If you’re ready to install it, click the checkbox. Now you can go back and add more web apps or tools to the install list if you like.  The Web Platform Installer will automatically find and select prerequisite apps such as MySQL, so you won’t need to worry about finding them. Once you’ve selected everything you want to install, click the Install button on the bottom of the window. The Web Platform Installer will now show you everything that’s selected, including components that it automatically selected.  Notice we only chose to install WordPress and Visual Web Developer 2010 Express, but it also has selected MySQL and PHP automatically.  Click I Accept to proceed. Enter an administrator password for MySQL before the setup begins. Now the Web Platform Installer will take over, automatically downloading, installing, and configuring all of your web apps.  It will also activate optional Windows components that may be needed on your computer.  
This may take several minutes, depending on the components you selected and your internet speed.   Setting up Your Test Site Once the installation is finished, you’ll be asked to enter some information about your site.  You can simply accept the defaults or enter your own choices, and then click Continue. Now you’ll need to enter some information for your web apps.  When installing WordPress, you’ll need to choose a database and enter administrative usernames and passwords.  You may also be asked to enter extra information for additional security, but for a local-only test site this isn’t necessary.  Click Continue when you’re finished. You’ll need to wait a few more moments as it complete the setup of your web apps.  The good thing is, once it’s finished, they’ll be ready to go with only minimal configuration. And you’re finished!  The installer will let you know everything it installed, and if there were any problems.  In our test, Visual Web Developer 2010 Express failed to install successfully.  Often the problems may be with the download, so click Finish and then reselect the apps that didn’t install and run the installer again. Now you’re ready to run WordPress from your PC.  Click the Launch WordPress link or enter http://localhost:80/wordpress in your browser to get started. You’ll only have a little more setup to do on WordPress to get it running.  Once you’ve opened your WordPress page in your browser, enter a name for your blog and your email address, and click Install WordPress.   After a few seconds, you should see a Success! page with your username and a temporary password.  Copy the password, and then click Log In. Enter admin as the Username and paste the random generated password, and click Log In. WordPress will remind you to change the default password.  Click the Yes, Take me to my profile page link to do this. Enter something easier for you to remember, and click Update Profile. Now you’re ready to enjoy your new WordPress install on Windows.  You can add plugins and themes, and everything else you’d do with a normal WordPress site.  Here’s the dashboard running from localhost. And here’s the default blog running. Setting up Visual Web Developer 2010 Express As mentioned before, Visual Web Developer 2010 Express didn’t install correctly on our first try, but the second time it installed seamlessly.  Once it’s installed, launch it from your start menu as normal.  It may take a few minutes to load on the first run as it is finishing up setup. You may notice that the splash screen displayed while the program is loading says For Evaluation Purposes Only.  This is because you still need to register the program. You have 30 days to register the program, but let’s go ahead and do it to get this step out of the way.  Click Help in the menu bar, and select Register Product. Click Obtain a registration key online in the popup window. You’ll need to sign in with your Windows Live ID, and then fill out a quick form. When you’re done, copy the registration key displayed and paste it into the registration dialog in Visual Web Developer.   Now you’ve got a registered, free web development program with full standards compliance and IntelliSense to help you work smarter and faster.  And it works great with your local web apps, so you can create, tweak, and then deploy, all from your desktop with this simple installer! Install More Apps You can always run the Web Platform Installer again in the future and add more apps if you’d like.  
The install adds a link to the Installer in the Start menu; just run it and repeat the steps above with your new selections. Also, from the installer, you can cleanup the setup files downloaded during the installation if you want. Click the Options link in the bottom of the window, and then scroll down and select Delete installer cache folder. Uninstalling the apps is not as easy, unfortunately. If you wish to uninstall the Web Platform Installer and everything you installed with it, you’ll need to uninstall each item individually. One easy way to see what was all installed together is to sort the entries in Uninstall Programs by date. In our case, we also installed some other applications on the same day, but it’s easier to see what was installed together. Or if you are not a fan of using Programs and Features to uninstall them, try out a program like Revo Uninstaller Pro. Conclusion Whether you’re a full-time web developer or just enjoy testing out the latest web apps, the Web Platform Installer makes it quick and easy to get your computer loaded up with the latest bits. In fact, it’s easier to install these tools with all their dependencies than it is to install many standard boxed programs. If you’d like to take your web server anywhere you go and not have it confined to your desktop, then check out our article on how to Turn Your Flashdrive into a Portable Webserver. Link Download the Microsoft Web Platform Installer

    Read the article

  • Friday Fun: Super Mario Bros. Crossover

    - by Mysticgeek
    Friday is finally here and it’s time to waste the afternoon on company time. Today we take a look at a super cool Classic NES mashup called Super Mario Bros. Crossover. The game is Super Mario Bros. the way you remember it. However, the cool thing is you can switch between different classic NES game characters and use their moves and attacks during gameplay. Characters like Link, Mega Man, Samus…and more. When you are a different game character you’re shown tips on how to use their moves in the game. Playing as Link… Between each world you can select a different character, which is pretty neat. If you want to play this classic the way you remember it, you can be Mario too. This can be played using your keyboard, but it also supports using a controller, which you can find the instructions for at the link below. You probably don’t want to bring a controller to work…but it’s cool they give the option. Make sure to turn the volume down on your computer so your boss is none the wiser, and believes you’re working hard. Play Super Mario Bros. Crossover How To Play Super Mario Bros. Crossover with a Gamepad

    Read the article

  • Adding my podcast to my Facebook fan page

    - by Donald Burr
    I've set up a Facebook fan page for my podcast, Otaku no Podcast. I'd love to add a Flash based player to the fan page that can play the latest episode of my podcast. Or, at the very least, a link to the latest episode on my website (which has its own Flash-based audio player). My podcast's website of course exports a valid RSS feed. I've tried several different podcast player/RSS feed display applications including Podcast Pickle (which has a facebook app), but none of them appear to work and/or are maintained any more. Podcast Pickle used to work for me a long time ago, but is no longer working for me. Any ideas?

    Read the article

  • links for 2010-04-09

    - by Bob Rhubart
    Brian Dayton: My Doors - Why Standards Matter to Business "My 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different." -- Brian Dayton (tags: oracle otn architecture businessalignment standards) Edwin Biemond: ADF Task Flow interaction with WebCenter Composer Oracle ACE Edwin Biemond of Whitehorses describes how to manage independent task flows at runtime with Oracle WebCenter Composer. (tags: otn oracle oracleace webcenter enterprise2.0) John Mead: Exadata in Retail Presentation Rittman Mead's John Mead shares slides describing a recent project: a custom data warehouse built on Exadata, populated by CDC with reporting delivered by OBIEE. (tags: oracle otn rittmanmead datawarehousing exadata obiee cdc) Where's The Line Between Architecting And Engineering? | Forrester Blogs Forrester's Gene Leganza answers the question "What is the difference between architecting and designing or, alternately, between architecture and engineering?" (tags: architecture engineering forrester)

    Read the article

  • Resources for the “What’s New in VS 2013” Presentation

    - by John Alexander
    Originally posted on: http://geekswithblogs.net/jalexander/archive/2013/10/24/resources-for-the-ldquowhatrsquos-new-in-vs-2013rdquo-presentation.aspx Thanks for attending the “What’s New in Visual Studio 2013 (and TFS too)” presentation. As promised, here are some links! Note: if you didn’t attend, it’s OK. This is for you, too. The bits themselves. This article introduces new and enhanced features in Visual Studio 2013. Visual Studio Virtual Launch – lots of videos here, and then on November 13th, live sessions and a Q&A session… What features map to what Visual Studio editions. Visual Studio 2013 New Editor Features. Visual Studio 2013 Application Lifecycle Management Virtual Machine and Hands-on-Labs / Demo Scripts from Brian Keller. More on CodeLens from Zain Naboulsi. What are Web Essentials? You can now download Web Essentials for Visual Studio 2013 RTM. A great overview of TFS 2013 from Brian Harry. The release archive lists updates made to Team Foundation Service along with which version of Team Foundation Server the updates are a part of. REST API for Team Rooms. “What's new in Visual Studio for Web Developers and Front End Devs” screencasts – quick, easy and painless from the always awesome Scott Hanselman. Introducing ASP.NET Identity – a membership system for ASP.NET applications. Visual Studio 2013 Adds New Project Templates with Improvements and Social Accounts Authentication.

    Read the article

  • ASP.NET in Moscow!

    - by Stephen Walther
    I’m traveling to Russia and speaking in Moscow next week at the DevConf. This will be the first time that I have visited Russia, and I know that there is a strong ASP.NET community in Russia, so I am very excited about the trip. I’m speaking at the DevConf (http://www.devconf.ru/). I don’t speak Russian, so the only words that I recognize off the home page of the conference website are ASP.NET and JavaScript (PHP, Perl, Python, and Ruby must be Russian words). I’m giving talks on both ASP.NET Web Forms and ASP.NET MVC: What’s New in ASP.NET 4 Web Forms Learn about the new features just released with ASP.NET 4 Web Forms and Visual Studio 2010 that enable you to be more productive and build better websites. Learn how to take control of your markup, client IDs, and view state. Learn how to take advantage of routing with Web Forms to make your websites more search engine friendly.   What’s New in ASP.NET MVC 2 Come learn about the new features being introduced with ASP.NET MVC 2. Templated helpers allow associating edit and display elements with data types automatically. Areas provide a means of dividing a large Web application into multiple projects. Data annotations allows attaching metadata attributes on a model to control validation. Client validation enables form field validation without the need to perform a roundtrip to the server. Learn how these new features enable you to be more productive when building ASP.NET MVC applications. Hope to see you at the conference next week!
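    As a quick illustration of the data annotations feature mentioned in the MVC 2 talk, here is a hedged sketch - the model and its property names are invented for the example. Metadata attributes from System.ComponentModel.DataAnnotations attached to a model drive validation in ASP.NET MVC 2:

        using System.ComponentModel.DataAnnotations;

        // Hypothetical view model; the names are made up for illustration only.
        public class SessionProposal
        {
            [Required(ErrorMessage = "A title is required.")]
            [StringLength(100)]
            public string Title { get; set; }

            [Range(10, 90)]   // talk length in minutes
            public int DurationMinutes { get; set; }
        }

    With client validation enabled in the view (calling Html.EnableClientValidation() before the form and referencing the MicrosoftAjax/MicrosoftMvcValidation scripts), these same annotations are checked in the browser without a round trip to the server.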

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far I as know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an Asp.Net web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure and since the site has low bandwidth/space requirements the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free. I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command: RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak' Msg 40510, Level 16, State 1, Line 1 Statement 'RESTORE DATABASE' is not supported in this version of SQL Server. Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure" Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database which enhances security but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive. 
    Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/

    Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page…and voila, I was in like Flynn. There are other options for deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it.

    I uploaded the .aspx, .cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow.

    The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out.

    However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception:

    "An error occurred during local report processing. Parameter is not valid."

    Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017

    Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person:

    Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted.

    If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess.

    Another problem: while trying to resolve the ReportViewer issue, I tried to write a file to the PDF directory with some test code to see if there was a permissions problem:

    String MyPath = MapPath(@"~\PDFs\Test.txt");
    File.WriteAllText(MyPath, "Hello Azure");

    I got this message: Access to the path <my path> is denied.

    After some research, I understood that since Azure is a cloud-based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options:

    Use the Azure APIs to get a path. That way the location of the storage is separated from the application.
    However, the web site is then tied to Azure and can't be moved to another hosting platform.

    Use the ApplicationData folder (not recommended).

    Write to BLOB storage.

    Or I could try to stream the PDF output directly to the email and not save a file at all (a rough sketch of that idea appears at the end of this post).

    I'm not going to work on a final solution until the ReportViewer issue is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note: the author of the BLOG added a comment saying he has updated his entry.)

    Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or, much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it.)

    *Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not included in Visual Studio 2010 Express.

    I hope someone finds this useful.

    Steve Wellens
    CodeProject
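    Purely as an illustration of that last option - rendering the PDF to memory and streaming it rather than saving it to disk - here is a rough sketch. It assumes a LocalReport that is already configured; the names are illustrative, not taken from the original site, and this only sidesteps the file-system restriction, not the GDI limitation described above.

    string mimeType, encoding, extension;
    string[] streamIds;
    Microsoft.Reporting.WebForms.Warning[] warnings;   // WebForms flavor of ReportViewer assumed

    // Render the report to a byte array in memory instead of a file on disk
    byte[] pdfBytes = reportViewer.LocalReport.Render(
        "PDF", null, out mimeType, out encoding, out extension,
        out streamIds, out warnings);

    // Send it straight down the HTTP response...
    Response.Clear();
    Response.ContentType = mimeType;   // "application/pdf"
    Response.AddHeader("Content-Disposition", "attachment; filename=Report.pdf");
    Response.BinaryWrite(pdfBytes);
    Response.End();

    The same byte array could instead be wrapped in a MemoryStream and attached to a MailMessage if the goal is to email the PDF rather than return it to the browser.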

    Read the article

  • ASP.NET MVC and ToolBox

    - by imran_ku07
    Introduction:
        ASP.NET MVC's popularity in today's world of web applications is no secret. One of the great things in ASP.NET MVC is the separation of concerns, in which presentation views are separate from the business or model layer. For these views, ASP.NET MVC provides some very good helper controls which generate commonly used HTML markup fragments using a shorter syntax, so the views feel familiar to Web Forms developers. But a pain point for developers using these controls is that they need to type the helpers every time they want a control, because they are more used to dragging and dropping controls from the Toolbox. So in this article I will use a cool feature of Visual Studio that allows you to add these controls to the Toolbox once and then, when needed, just drag and drop them from the Toolbox, very much like in Web Forms.

    Description:
        The Visual Studio Toolbox is rich enough to let you store code and HTML snippets. All you need to do is select the HTML helper in the view and drag and drop it onto the Toolbox. Repeat this procedure for every HTML helper in ASP.NET MVC (a few typical helper fragments are shown after this article). When you need to use an HTML helper, you can drag and drop it from the Toolbox and enjoy drag-and-drop programming.

    Summary:
        In this article you saw how Visual Studio lets you drag and drop HTML snippets from the view into the Toolbox. This is one of the coolest features in Visual Studio.
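    For reference, these are the kinds of helper fragments you might select and drag onto the Toolbox - illustrative examples in the classic ASPX view syntax, not a list taken from the article:

    <%= Html.TextBox("Name") %>
    <%= Html.ActionLink("Home", "Index") %>
    <% using (Html.BeginForm()) { %>
        <!-- form fields go here -->
    <% } %>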

    Read the article

  • New Java Champion: Michael Levin

    - by Tori Wieldt
    Welcome Michael Levin to the Java Champion community! Michael is a JUG leader involved with the OrlandoJUG (Orlando, FL), the GatorJUG (Gainesville, FL), the West African JUG SeneJUG, and the CajunJUG (New Orleans, LA). Michael is based in the USA. He is a business owner, and his business, Cambridge Web Design, Inc., specializes in custom software and Web 2.0 website development (www.cambridgeweb.ie). He recently provided JCertif Java Training in Brazzaville, Republic of Congo. He also founded Codetown, an online community for software developers, located at www.codetown.us, and has a tech podcast called Swampcast located at www.swampcast.com. You can follow him on Twitter @mikelevin.

    The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. This interchange may be in the form of technical discussions and/or community-building activities with Oracle's Java Development and Developer Program teams.

    Java Champions are:
    • leaders
    • technical luminaries
    • independent-minded and credible
    • involved with some really cool applications of Java Technology or some humanitarian or educational effort
    • able to evangelize or influence other developers

    Congratulations to Michael on becoming the latest Java Champion!

    Read the article

< Previous Page | 427 428 429 430 431 432 433 434 435 436 437 438  | Next Page >