Search Results

Search found 6941 results on 278 pages for 'drupal views'.


  • Innodb Queries Slow

    - by user105196
    I have Red Hat 5.3 (Tikanga) with MySQL 5.0.86 configured on RAID 10 hardware. I run an application that queries MySQL InnoDB and MyISAM tables. The queries are normally super fast, but some queries on InnoDB tables occasionally slow down and take 1-3 seconds or more to run, even though these queries are simple and optimized. The problem occurs only on InnoDB tables, at different times, with random queries. Why is this happening only to InnoDB tables? Below is the InnoDB status output and some MySQL variables:

        show innodb status\G
        *************** 1. row ***************
        Status:
        120325 10:54:08 INNODB MONITOR OUTPUT
        Per second averages calculated from the last 19 seconds
        SEMAPHORES
        OS WAIT ARRAY INFO: reservation count 22943, signal count 22947
        Mutex spin waits 0, rounds 561745, OS waits 7664
        RW-shared spins 24427, OS waits 12201; RW-excl spins 1461, OS waits 1277
        TRANSACTIONS
        Trx id counter 0 119069326
        Purge done for trx's n:o < 0 119069326 undo n:o < 0 0
        History list length 41
        Total number of lock structs in row lock hash table 0
        LIST OF TRANSACTIONS FOR EACH SESSION:
        ---TRANSACTION 0 0, not started, process no 29093, OS thread id 1166043456
        MySQL thread id 703985, query id 5807220 localhost root
        show innodb status
        FILE I/O
        I/O thread 0 state: waiting for i/o request (insert buffer thread)
        I/O thread 1 state: waiting for i/o request (log thread)
        I/O thread 2 state: waiting for i/o request (read thread)
        I/O thread 3 state: waiting for i/o request (write thread)
        Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
        Pending flushes (fsync) log: 0; buffer pool: 0
        132777 OS file reads, 689086 OS file writes, 252010 OS fsyncs
        0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
        INSERT BUFFER AND ADAPTIVE HASH INDEX
        Ibuf: size 1, free list len 366, seg size 368, 62237 inserts, 62237 merged recs, 52881 merges
        Hash table size 8850487, used cells 3698960, node heap has 7061 buffer(s)
        0.00 hash searches/s, 0.00 non-hash searches/s
        LOG
        Log sequence number 15 3415398745
        Log flushed up to 15 3415398745
        Last checkpoint at 15 3415398745
        0 pending log writes, 0 pending chkp writes
        218214 log i/o's done, 0.00 log i/o's/second
        BUFFER POOL AND MEMORY
        Total memory allocated 4798817080; in additional pool allocated 12342784
        Buffer pool size 262144
        Free buffers 101603
        Database pages 153480
        Modified db pages 0
        Pending reads 0
        Pending writes: LRU 0, flush list 0, single page 0
        Pages read 151954, created 1526, written 494505
        0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        No buffer pool page gets since the last printout
        ROW OPERATIONS
        0 queries inside InnoDB, 0 queries in queue
        1 read views open inside InnoDB
        Main thread process no. 29093, id 1162049856, state: waiting for server activity
        Number of rows inserted 77675, updated 85439, deleted 0, read 14377072495
        0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        END OF INNODB MONITOR OUTPUT
        1 row in set, 1 warning (0.02 sec)

        read_buffer_size = 128M
        sort_buffer_size = 256M
        tmp_table_size = 1024M
        innodb_additional_mem_pool_size = 20M
        innodb_log_file_size = 10M
        innodb_lock_wait_timeout = 100
        innodb_buffer_pool_size = 4G
        join_buffer_size = 128M
        key_buffer_size = 1G

    Can anyone help me?
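    A first diagnostic step worth noting (an editorial suggestion, not part of the original question): MySQL 5.0 can capture intermittently slow statements in the slow query log, which makes it much easier to correlate slowdowns with specific queries and times. A minimal my.cnf sketch, assuming a log path writable by mysqld:

        [mysqld]
        # Log any statement slower than 1 second (5.0 only supports whole seconds)
        log-slow-queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1

    It may also be worth double-checking the per-session buffers: sort_buffer_size and join_buffer_size are allocated per connection in MySQL, so the 256M/128M values shown above can create memory pressure under concurrent load.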

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe): when I kill the process, the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already; my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory. The Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: "Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of 'Private Data' is more granular than that of Process Explorer's 'private bytes.' Procexp's 'private bytes' includes all private committed memory belonging to the process." That is, VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes:

    - File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files).
    - Pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).
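    As an aside for anyone wanting to reproduce the pslist -m tally programmatically, here is a minimal hedged C# sketch (an editorial illustration, not from the original question) that sums PrivateMemorySize64 across all processes:

        using System;
        using System.Diagnostics;
        using System.Linq;

        class PrivateBytesSum
        {
            static void Main()
            {
                // Approximates the sum of the "Priv" column from pslist -m.
                // Note: this total will not include pagefile-backed sections or
                // copy-on-write mapped views - the very discrepancy discussed above.
                long total = Process.GetProcesses().Sum(p =>
                {
                    try { return p.PrivateMemorySize64; }
                    catch { return 0L; } // access denied on some system processes
                });
                Console.WriteLine("Total Private Bytes: {0:N0} MB", total / (1024 * 1024));
            }
        }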

    Read the article

  • May 20th Links: ASP.NET MVC, ASP.NET, .NET 4, VS 2010, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series and ASP.NET MVC 2 series for other on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET MVC
    - How to Localize an ASP.NET MVC Application: Michael Ceranski has a good blog post that describes how to localize ASP.NET MVC 2 applications.
    - ASP.NET MVC with jTemplates Part 1 and Part 2: Steve Gentile has a nice two-part set of blog posts that demonstrate how to use the jTemplate and DataTable jQuery libraries to implement client-side data binding with ASP.NET MVC.
    - CascadingDropDown jQuery Plugin for ASP.NET MVC: Raj Kaimal has a nice blog post that demonstrates how to implement a dynamically constructed cascading dropdown list on the client using jQuery and ASP.NET MVC.
    - How to Configure VS 2010 Code Coverage for ASP.NET MVC Unit Tests: Visual Studio enables you to calculate the "code coverage" of your unit tests. This measures the percentage of code within your application that is exercised by your tests - and can give you a sense of how much test coverage you have. Gunnar Peipman demonstrates how to configure this for ASP.NET MVC projects.
    - Shrinkr URL Shortening Service Sample: A nice open source application and code sample built by Kazi Manzur that demonstrates how to implement a URL shortening service (like bit.ly) using ASP.NET MVC 2 and EF4. More details here.
    - Creating RSS Feeds in ASP.NET MVC: Damien Guard has a nice post that describes a cool new "FeedResult" class he created that makes it easy to publish and expose RSS feeds from within ASP.NET MVC sites.
    - NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2: Nice two-part blog series by Shiju Varghese on how to use MongoDB (a document database) with ASP.NET MVC. If you are interested in document databases also make sure to check out the Raven DB project from Ayende.
    - Using the FCKEditor with ASP.NET MVC: Quick blog post that describes how to use FCKEditor - an open source HTML text editor - with ASP.NET MVC.

    ASP.NET
    - Replace Html.Encode Calls with the New HTML Encoding Syntax: Phil Haack has a good blog post that describes a useful way to quickly update your ASP.NET pages and ASP.NET MVC views to use the new <%: %> encoding syntax in ASP.NET 4. I blogged about the new <%: %> syntax - it provides an easy and concise way to HTML encode content (a small example follows after this list).
    - Integrating Twitter into an ASP.NET Website using OAuth: Scott Mitchell has a nice article that describes how to take advantage of Twitter within an ASP.NET website using the OAuth protocol - a simple, secure protocol for granting API access.
    - Creating an ASP.NET report using VS 2010 Part 1, Part 2, and Part 3: Raj Kaimal has a nice three-part set of blog posts that detail how to use SQL Server Reporting Services, ASP.NET 4 and VS 2010 to create a dynamic reporting solution.
    - Three Hidden Extensibility Gems in ASP.NET 4: Phil Haack blogs about three obscure but useful extensibility points enabled with ASP.NET 4.

    .NET 4
    - Entity Framework 4 Video Series: Julie Lerman has a nice, free, 7-part video series on MSDN that walks through how to use the new EF4 capabilities with VS 2010 and .NET 4. I'll be covering EF4 in a blog series that I'm going to start shortly as well.
    - Getting Lazy with System.Lazy: System.Lazy and System.Lazy<T> are new features in .NET 4 that provide a way to create objects that may need to perform time-consuming operations and defer the execution of the operation until it is needed. Derik Whittaker has a nice write-up that describes how to use it.
    - LINQ to Twitter: Nifty open source library on CodePlex that enables you to use LINQ syntax to query Twitter.

    Visual Studio 2010
    - Using IntelliTrace in VS 2010: Chris Koenig has a nice 10 minute video that demonstrates how to use the new IntelliTrace features of VS 2010 to enable DVR playback of your debug sessions.
    - Make the VS 2010 IDE Colors look like VS 2008: Scott Hanselman has a nice blog post that covers the Visual Studio Color Theme Editor extension - which allows you to customize the VS 2010 IDE however you want.
    - How to understand your code using Dependency Graphs, Sequence Diagrams, and the Architecture Explorer: Jennifer Marsman has a nice blog post that describes how to take advantage of some of the new architecture features within VS 2010 to quickly analyze applications and legacy code-bases.
    - How to maintain control of your code using Layer Diagrams: Another great blog post by Jennifer Marsman that demonstrates how to set up a "layer diagram" within VS 2010 to enforce clean layering within your applications. This enables you to enforce a compiler error if someone inadvertently violates a layer design rule.
    - Collapse Selection in Solution Explorer Extension: Useful VS 2010 extension that enables you to quickly collapse "child nodes" within the Visual Studio Solution Explorer. If you have deeply nested project structures this extension is useful.

    Silverlight and Windows Phone 7
    - Building a Simple Windows Phone 7 Application: A nice tutorial blog post that demonstrates how to take advantage of Expression Blend to create an animated Windows Phone 7 application. If you haven't checked out my Windows Phone 7 Twitter Tutorial I also recommend reading that.

    Hope this helps, Scott

    P.S. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers.
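    To make the <%: %> syntax mentioned in the list above concrete, here is a minimal hedged sketch (the model property name is hypothetical):

        <%-- Before (ASP.NET 3.5): explicit encoding call --%>
        <%= Html.Encode(Model.UserName) %>

        <%-- After (ASP.NET 4): the <%: %> syntax HTML-encodes the output automatically --%>
        <%: Model.UserName %>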

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old, actually too old, idea that I have had ever since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I approached the person responsible for the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

    Why did it take you so long?
    Well, I don't want to bother you with the details, but the short version is that I was too busy with either work (building up new companies) or private life (got married and we have two lovely children, eh 'monsters') or even both. But now is the time where I am starting to look for new fields, given the fact that I have gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that!

    Why a new user group?
    Good question... And 'easy' to answer. Back in 2007 I did my usual research, eh Google searches, to see whether there were existing user groups in Mauritius and in which fields of interest. And yes, there are! If I recall correctly, there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (which used to be even two). But... either they do not exist anymore, they are dormant, or there is only a low heartbeat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticising others doing a very good job, I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

    Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or short: MSCC? As I've already written in my article on 'Communities - The importance of exchange and discussion', I think it is essential in the world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamism and so much daily news that it is almost impossible to keep on track with all of it. The MSCC is going to provide a common platform to exchange experience and share knowledge with each other. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator; or you're an experienced geek who loves to share ideas or solutions you implemented to solve a specific problem; or you're the business (or HR) guy that is looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

    Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlets, coffee, code and Linux in one go.

    The MSCC is technology-agnostic and spans an umbrella over any kind of technology, simply because you can't ignore other technologies anymore in a connected IT world like ours.
    A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer or get new impulses on how to tackle an issue at their workplace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy - compared to dusty books and remote online resources. It's human!

    Organising meetups (meetings, get-togethers, gatherings - you name it!)
    As of writing this article, the MSCC currently meets every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change in the future, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-)

    The MSCC's main online presence is located at Meetup.com because it allows me to handle the organisation of events and meeting appointments very easily, and any member can see who else is involved, so an exchange of contacts is possible at any time. In combination with the other entities (G+ Communities, FB Pages or Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community.

    "This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough." I know, there are not many 'craftsmen' yet, but it's a start... Let's see what it looks like by the end of the year.

    There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed about the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

    Sharing is caring!
    As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. The following is an overview of the current networks:
    - Twitter - latest updates and quickies
    - Google+ - community channel
    - Facebook - community page
    - LinkedIn - community group
    - Trello - collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC to your colleagues, your friends and other interested 'geeks'.

    Your future looks bright
    Running and participating in a user group or any kind of community usually provides quite a number of advantages for everyone.
    On the one side, it is very enjoyable for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or the recent problems they had to tackle; on the other side, there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, and sometimes even licenses for communities. Meeting other software developers or IT guys also broadens your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees.

    Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!

    Recent updates
    The MSCC is now officially participating in the O'Reilly UK User Group programme, and we are allowed to request review or recension copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programmes are pending, and I'm looking forward to a couple of announcements very soon.

    And... we need some kind of 'corporate identity' - over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and web designers. Adobe Photoshop Lightroom and Apple's Aperture are in the same league, but their usage is vastly different. Although Photoshop is the most popular and widely used tool among photographers, in many ways it is less relevant to photographers than ever before, as Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up we are going to help you choose what is right for you and why.

    Adobe Photoshop
    Adobe Photoshop is the preferred tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing, and includes comprehensive optimization features for producing the highest quality web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and the history palette are some of the smart features which add to its viability. Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration with other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless web graphics workflow. Photoshop is hands-down the best for editing.

    Photoshop Cons:
    • Slower, less precise editing features in Bridge
    • Processing lots of images requires actions and can be slower than exporting images from Lightroom
    • Much slower with editing and processing a large number of images

    Aperture
    Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them manage their workflow and perform their initial raw conversion in a better way. Aperture provides adjustment tools such as the Histogram to modify color and white balance, but most photo editing is left to Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts stars, color-coding and easy techniques for filtering and picking images. Aperture has moved a few steps ahead in workflow terms, but most of the editing work is left to Photoshop, with which it integrates seamlessly.

    Aperture Pros:
    • Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn.
    • Adjustments are made in a logical order from top to bottom of the menu.
    • You can store the images in a library or any folder you choose.
    • Aperture also works really well with direct Canon files.
    • It is just $79 if you buy it through Apple's App Store.
    • Moving forward, it will run on the iPad, and possibly the iPhone - Adobe products like Lightroom and Photoshop may never offer these options.
    • It has a much nicer and simpler user interface.

    Lightroom
    Lightroom does a smashing job of basic fixing and editing. It is the more advanced tool for photographers, who can use it to achieve startling photographic effects. Lightroom has many advanced features which make it one of the best tools for photographers and put it far ahead of the other two:
    • Nondestructive editing: nothing is actually changed in an image until the photo is exported.
    • Better controls for organizing your photos. Lightroom helps to gather a group of photos to use in a slideshow.
    • Lightroom has larger Compare and Survey views of images.
    • Quickly customizable interface: simple keystrokes allow you to perform different tasks.
    • All Lightroom controls are kept available in panels right next to the photos.
    • An always-available History palette that persists even after you close Lightroom.
    • You gain more colors to work with compared to Photoshop, and with more precise control.
    • Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, affect color and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across the areas.
    • Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive.
    • Lightroom supports the RAW formats of more cameras than Aperture.
    • Lightroom provides the option of storing images outside the application in the file system.
    • It costs less than Photoshop.

    Why is Photoshop more advanced than Lightroom?
    There are countless image processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can.

    Lightroom's advantages over Aperture 3:
    • It will always have better integration with Photoshop.
    • Lightroom is backed by a bigger and more active user community (so there is abundant availability of tutorials, etc.).
    • Better noise reduction tool.
    • The lens-distortion correction tool is perfect, especially for photographers.

    Lightroom Cons:
    • Have to import images to work on them
    • Slows down with over 10,000 images in the catalog
    • For processing just one or two images this is a slower workflow

    Photoshop Pros:
    • ACR has the same RAW processing controls as Lightroom
    • The ACR histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve)
    • Don't have to import images to open in Bridge or ACR
    • Ability to customize processing of RAW images with Photoshop Actions

    Pricing and Availability
    The latest version of Photoshop can be purchased from the Adobe store and Adobe authorized resellers and costs US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe Store or an Adobe authorized reseller for US$299.

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets way more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 FB likes and a gossip blog post with 300 while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, versus the tech blog, which has a higher absolute number of likes but only an average count for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:  10 fb likes,   4 tweets,  6 g+
        after 30 min:  15 fb likes,  15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hours after posting is 50 and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, e.g. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schemas.

    Schema 1
    DB Popular Posts
    Table: Post
        post_id (primary key (pk))
        url
        title
    Table: Social Activity
        activity_id (pk)
        url (fk)
        type (i.e. facebook, twitter, g+)
        value
        timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but with the social activity broken down into 2 tables)
    DB Popular Posts
    Table: Post
        post_id (pk)
        url
        title
    Table: Social Measurement
        measurement_id (pk)
        post_id (fk)
        timestamp
    Table: Social Stat
        stat_id (pk)
        measurement_id (fk)
        type (i.e. facebook, twitter, g+)
        value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time, i.e. when making a measurement at 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3
    DB Popular Posts
    Table: Post
        post_id (pk)
        url
        title
    Table: fb_likes
        fb_like_id (pk)
        post_id (fk)
        timestamp
        value
    Table: fb_shares
        fb_shares_id (pk)
        post_id (fk)
        timestamp
        value
    Table: tweets
        tweet_id (pk)
        post_id (fk)
        timestamp
        value
    Table: google_plus
        google_plus_id (pk)
        post_id (fk)
        timestamp
        value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
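    For concreteness, here is one hedged way Schema 2 could be expressed as Entity Framework-style C# entity classes (an editorial sketch mirroring the tables above; the class and property names are illustrative, not prescriptive):

        using System;
        using System.Collections.Generic;

        public class Post
        {
            public int PostId { get; set; }               // pk
            public string Url { get; set; }
            public string Title { get; set; }
            public ICollection<SocialMeasurement> Measurements { get; set; }
        }

        // One row per polling pass (e.g. every 15 minutes) per post.
        public class SocialMeasurement
        {
            public int MeasurementId { get; set; }        // pk
            public int PostId { get; set; }               // fk -> Post
            public DateTime Timestamp { get; set; }
            public ICollection<SocialStat> Stats { get; set; }
        }

        // One row per network metric captured in a measurement pass.
        public class SocialStat
        {
            public int StatId { get; set; }               // pk
            public int MeasurementId { get; set; }        // fk -> SocialMeasurement
            public string Type { get; set; }              // "facebook", "twitter", "g+", ...
            public long Value { get; set; }
        }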

    Read the article

  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
    The Microsoft Windows Azure team has released a new version of Windows Azure which provides many excellent features. The new Windows Azure provides Web Sites, which allows you to deploy up to 10 web sites for free in a multitenant shared environment, and you can easily upgrade a web site to a private, dedicated virtual server when the traffic grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site:

    "Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure's global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances."

    Windows Azure Web Sites includes support for the following:
    - Multiple frameworks including ASP.NET, PHP and Node.js
    - Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke
    - Windows Azure SQL Database and MySQL databases
    - Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix

    Sign up to Windows Azure and Enable Web Sites
    You can sign up for a 90-day free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/ and select "preview features" to view the available previews. In the Web Sites section of the preview features, click "try it now", which enables the web sites feature.

    Create a Web Site in Windows Azure
    To create a web site, log in to the Windows Azure portal, select Web Sites, and click the New icon in the left corner. Click WEB SITE, QUICK CREATE and provide values for the URL and REGION dropdowns. You can see all the web sites on the dashboard of the Windows Azure portal.

    Set up Git Publishing
    Select your web site from the dashboard, and select "Set up Git publishing". To enable Git publishing, you must provide a user name and password, which will initialize a Git repository.

    Clone the Git Repository
    We can use GitHub for Windows to publish apps to non-GitHub repositories, which is well explained by Phil Haack in his blog post. Here we are going to deploy the web site using GitHub for Windows. Let's clone a Git repository using the Git URL obtained from the Windows Azure portal: copy the Git URL and execute "git clone" with it. You can use the Git shell provided by GitHub for Windows; to get it, right-click on GitHub for Windows and select "open shell here" as shown in the picture below. When executing the "git clone" command, it will ask for a password; give the password specified in the Windows Azure portal. After cloning the Git repository, you can drag and drop the local Git repository folder onto the GitHub for Windows GUI. This automatically adds the Windows Azure Web Site repository to GitHub for Windows, where you can commit your changes and publish your web site to Windows Azure.

    Publish the Web Site using GitHub for Windows
    We can add files for multiple frameworks, including ASP.NET, PHP and Node.js, to the local repository folder and easily publish them to Windows Azure from the GitHub for Windows GUI.
    For this demo, let me just add a simple Node.js file named Server.js which defines a few request handlers:

        var http = require('http');
        var port = process.env.PORT;
        var querystring = require('querystring');
        var utils = require('util');
        var url = require("url");

        var server = http.createServer(function(req, res) {
            switch (req.url) { // checking the request url
                case '/':
                    homePageHandler(req, res);      // handler for home page
                    break;
                case '/register':
                    registerFormHandler(req, res);  // handler for register
                    break;
                default:
                    nofoundHandler(req, res);       // handler for 404 not found
                    break;
            }
        });
        server.listen(port);

        // function to display the html form
        function homePageHandler(req, res) {
            console.log('Request handler home was called.');
            res.writeHead(200, {'Content-Type': 'text/html'});
            var body = '<html>' +
                '<head>' +
                '<meta http-equiv="Content-Type" content="text/html; ' +
                'charset=UTF-8" />' +
                '</head>' +
                '<body>' +
                '<form action="/register" method="post">' +
                'Name:<input type=text value="" name="name" size=15></br>' +
                'Email:<input type=text value="" name="email" size=15></br>' +
                '<input type="submit" value="Submit" />' +
                '</form>' +
                '</body>' +
                '</html>';
            // response content
            res.end(body);
        }

        // handler for Post request
        function registerFormHandler(req, res) {
            console.log('Request handler register was called.');
            var pathname = url.parse(req.url).pathname;
            console.log("Request for " + pathname + " received.");
            var postData = "";
            req.on('data', function(chunk) {
                // append the current chunk of data to the postData variable
                postData += chunk.toString();
            });
            req.on('end', function() {
                // doing something with the posted data
                res.writeHead(200, "OK", {'Content-Type': 'text/html'});
                // parse the posted data
                var decodedBody = querystring.parse(postData);
                // output the decoded data to the HTTP response
                res.write('<html><head><title>Post data</title></head><body><pre>');
                res.write(utils.inspect(decodedBody));
                res.write('</pre></body></html>');
                res.end();
            });
        }

        // Error handler for 404 not found
        function nofoundHandler(req, res) {
            console.log('Request handler nofound was called.');
            res.writeHead(404, {'Content-Type': 'text/plain'});
            res.end('404 Error - Request handler not found');
        }

    If there is any change in the local repository folder, GitHub for Windows will automatically detect it. In the step above, we have just added a Server.js file, so GitHub for Windows will detect the change. Let's commit the changes to the local repository before publishing the web site to Windows Azure. After committing all the changes, you can click the publish button, which publishes all changes to the Windows Azure repository.
    The following screenshot shows the deployment history in the Windows Azure portal. GitHub for Windows provides a sync button which can be used to synchronize the local repository with the Windows Azure repository after making commits locally. Our web site is up and running after the deployment using Git.

    Summary
    Windows Azure Web Sites lets developers easily build and deploy websites with support for multiple frameworks including ASP.NET, PHP and Node.js, and Web Sites can easily be deployed using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we deployed a Node.js web site to Windows Azure using Git. We can use GitHub for Windows to publish apps to non-GitHub repositories, including publishing Web Sites to Windows Azure.
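    For reference, the clone/commit/publish cycle described above maps to the following Git commands when run from a plain shell (a hedged sketch; the repository URL and folder name are placeholders for the values shown in your Azure portal):

        # clone the Azure web site repository (you will be prompted for the
        # deployment password configured in the portal)
        git clone <git-url-from-portal>
        cd <site-folder>

        # add Server.js, then stage, commit and push to deploy
        git add .
        git commit -m "Add Server.js"
        git push origin master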

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
    The MySQL Innovation Day held on June 5, 2012 was a great event for MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas.

    Monica: This was the first MySQL Innovation Day, correct? Why now? What was the strategy behind hosting this kind of event?

    Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide - some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We'll have a lot more of our engineers and many users and community members presenting hour-long sessions and hands-on labs. Our engineers will be presenting new MySQL features as well as offering previews of upcoming enhancements.

    Monica: What's the big takeaway from today's MySQL Innovation Day?

    Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive, MySQL innovation with a steady stream of great new GA and Development Milestone releases.

    Monica: What were attendees most interested in? What feedback did they have?

    Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about "Scaling Twitter with MySQL" and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download, while some of the session videos will be made available on the MySQL Innovation Day web page shortly.

    Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider?

    Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP and CRM, as well as high-end data warehousing and business intelligence applications.

    Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and early access features for MySQL Cluster 7.3?

    Tomas: The MySQL 5.6 development milestone release builds on MySQL 5.5 by improving:
    - the Optimizer, for better performance and scalability
    - the Performance Schema, for better instrumentation
    - InnoDB, for better transactional throughput
    - Replication, for higher availability and data integrity
    - NoSQL options, for more flexibility

    We announced some new early access features in MySQL 5.6, including binary log group commit. We also announced early access features in MySQL Cluster 7.3, including support for foreign key constraints.

    Monica: How do people get these releases?

    Tomas: You can access development milestone releases by going to http://dev.mysql.com/downloads/mysql and then selecting the "Development Release" tab. The MySQL Cluster 7.3 and other early access features can be downloaded at http://labs.mysql.com.

    Monica: What's coming up next for MySQL?

    Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don't miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us!

    Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference.

    To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL. Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on 'Orwellian Event Processing'. I was involved in a heated exchange in the comments, and he has more recently published a post entitled 'Disadvantages of Rule-Based Systems (Part 1)'. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule processing and an explanation of my comments.

    For me, the 'red rag' lay in Tim's claim that "...rules alone are highly inefficient for most classes of (not simple) problems" and a later paragraph that appears to equate simplicity of form ('IF-THEN-ELSE') with simplicity of function. It is not the first time Tim has expressed these views, and not the first time I have responded to his assertions. Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in 'robust' terms, often asserting that very many other people's opinions on this subject are mistaken. In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken.

    There is no simple answer to the fundamental question 'what is a rule?' We use the word in a very fluid fashion in English. Likewise, the term 'rule processing', as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a 'centre of gravity' within a wider domain. That domain contains many other 'centres of gravity', including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term 'rule processing' is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.

    As a centre of gravity, 'rule processing' is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous. Rule processing does not, for example, exist in splendid isolation from natural language processing. On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms. Rule processing does not exist in splendid isolation from CEP. On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in 'Power of Events' by David Luckham). Rule processing does not live in splendid isolation from statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic. Rule processing does not even live in splendid isolation from neural networks. For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets.

    What about simplicity of form? Many rule processing technologies do indeed use a very simple form ('If...Then', 'When...Do', etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity (a small sketch illustrating this point follows below).

    Does rule processing offer a 'one size fits all'? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the 'problem space' ('search space'), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice.

    Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.
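    To make the form/function point concrete in code (an illustrative sketch, not from the original post): even the simplest possible rule form - a condition paired with an action - yields non-trivial inference once rules are chained over shared working memory. A minimal forward-chaining sketch in C#:

        using System;
        using System.Collections.Generic;

        class Rule
        {
            public string Name;
            public Func<HashSet<string>, bool> If;   // condition over working memory
            public Action<HashSet<string>> Then;     // consequence
        }

        class ForwardChainer
        {
            // Fire any rule whose condition holds until no rule adds a new fact:
            // the trivially simple If/Then form composes into multi-step inference.
            static void Run(List<Rule> rules, HashSet<string> facts)
            {
                bool changed;
                do
                {
                    changed = false;
                    foreach (var rule in rules)
                    {
                        if (rule.If(facts))
                        {
                            int before = facts.Count;
                            rule.Then(facts);
                            if (facts.Count != before) changed = true;
                        }
                    }
                } while (changed);
            }

            static void Main()
            {
                var rules = new List<Rule>
                {
                    new Rule { Name = "R1", If = f => f.Contains("order-placed"),
                               Then = f => f.Add("payment-due") },
                    new Rule { Name = "R2", If = f => f.Contains("payment-due"),
                               Then = f => f.Add("invoice-issued") }
                };
                var facts = new HashSet<string> { "order-placed" };
                Run(rules, facts);
                // prints all three derived facts (enumeration order may vary)
                Console.WriteLine(string.Join(", ", facts));
            }
        }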

    Read the article

  • Request Limit Length Limits for IIS’s requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process on the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour - and not for the first time - was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

    http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful - normally a 404 is a "resource not found" error - but the resource is definitely there. So how the heck do you figure out what's wrong? If you're just interested in the solution, here's the short version: IIS by default allows only a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the requestFiltering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\system32\inetsrv\config). To set the value, configure the requestLimits key like so:

    <configuration>
      <security>
        <requestFiltering>
          <requestLimits maxQueryString="2048">
          </requestLimits>
        </requestFiltering>
      </security>
    </configuration>

    This fixed me right up and made the requests work. How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem
    The issue with the error returned is that IIS returns a 404 Resource Not Found error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page. The bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run localhost.
    On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info
    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this: find the Failed Request Tracing Rules in the IIS Service Manager, select the option and then Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of logged errors down. Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module.

    Ok, it's the RequestFilteringModule. Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since these return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 7 and forward at least provides us the tools that can help point us in the right direction. The error message in the FRT report isn't as nice as the IIS error message, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down in a few tries...

    Hope this was useful to some of you. It's useful to me to put this out as a reminder - I've run into this issue before myself and totally forgot. Next time I got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET, Security
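    As a related note (an editorial addition, hedged): if you'd rather not edit ApplicationHost.config directly, the same limit can usually be set per-site in web.config, assuming the requestFiltering section isn't locked at the server level:

    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- allow query strings up to 2048 bytes -->
            <requestLimits maxQueryString="2048" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>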

    Read the article

  • ViewBag dynamic in ASP.NET MVC 3 - RC 2

    - by hajan
    Earlier today Scott Guthrie announced ASP.NET MVC 3 Release Candidate 2. I installed the new version right after the announcement since I was eager to see the new features. Among other cool features included in this release candidate, there is a new ViewBag dynamic which can be used to pass data from Controllers to Views the same way you use the ViewData[] dictionary. What is great and nice about ViewBag (despite the name) is that it's a dynamic type, which means you can dynamically get/set values and add any number of additional fields without needing strongly-typed classes. In order to see the difference, please take a look at the following examples.

    Example - Using ViewData

    Controller:

    public ActionResult Index()
    {
        List<string> colors = new List<string>();
        colors.Add("red");
        colors.Add("green");
        colors.Add("blue");
        ViewData["listColors"] = colors;
        ViewData["dateNow"] = DateTime.Now;
        ViewData["name"] = "Hajan";
        ViewData["age"] = 25;
        return View();
    }

    View (ASPX View Engine):

    <p>
        My name is
        <b><%: ViewData["name"] %></b>,
        <b><%: ViewData["age"] %></b> years old.
        <br />
        I like the following colors:
    </p>
    <ul id="colors">
    <% foreach (var color in ViewData["listColors"] as List<string>) { %>
        <li>
            <font color="<%: color %>"><%: color %></font>
        </li>
    <% } %>
    </ul>
    <p>
        <%: ViewData["dateNow"] %>
    </p>

    (I know the code might look cleaner with the Razor view engine, but it doesn't matter, right? ;) )

    Example - Using ViewBag

    Controller:

    public ActionResult Index()
    {
        List<string> colors = new List<string>();
        colors.Add("red");
        colors.Add("green");
        colors.Add("blue");
        ViewBag.ListColors = colors; // colors is List<string>
        ViewBag.DateNow = DateTime.Now;
        ViewBag.Name = "Hajan";
        ViewBag.Age = 25;
        return View();
    }

    You see the difference?

    View (ASPX View Engine):

    <p>
        My name is
        <b><%: ViewBag.Name %></b>,
        <b><%: ViewBag.Age %></b> years old.
        <br />
        I like the following colors:
    </p>
    <ul id="colors">
    <% foreach (var color in ViewBag.ListColors) { %>
        <li>
            <font color="<%: color %>"><%: color %></font>
        </li>
    <% } %>
    </ul>
    <p>
        <%: ViewBag.DateNow %>
    </p>

    In my example I now don't need to cast ViewBag.ListColors as List<string>, since ViewBag is a dynamic type. On the other hand, ViewData["key"] is object. I would also like to note that if you use ViewData["ListColors"] = colors; in your Controller, you can retrieve it in the View using ViewBag.ListColors. The result in both cases is the same. Hope you like it! Regards, Hajan

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do: ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this: WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212 While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that Web Resources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages. Where's my Page object, Dude? Unfortunately ASP.NET doesn't natively have a mechanism for retrieving WebResource URLs outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods) and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if you could, the internal constructor it does have requires a Page reference. No good… So, can we get access to a WebResourceUrl generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource URLs, so it's easy to create an instance of a Page, for example, in a static method. For our needs of retrieving resource URLs or even actually retrieving script resources, we can use a canned, non-configured Page instance we create on our own. The following works just fine: public static string GetWebResourceUrl(Type type, string resource ) { Page page = new Page(); return page.ClientScript.GetWebResourceUrl(type, resource); } A slight optimization for this might be to cache the created Page instance.
Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance: public class WebUtils { private static Page CachedPage { get { if (_CachedPage == null) _CachedPage = new Page(); return _CachedPage; } } private static Page _CachedPage; public static string GetWebResourceUrl(Type type, string resource) { return CachedPage.ClientScript.GetWebResourceUrl(type, resource); } } You can now use GetWebResourceUrl in a Razor page like this: <!DOCTYPE html> <html> <head> <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)"> </script> </head> <body> <div class="errordisplay"> <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" /> This is only a Test! </div> </body> </html> And voila - there you have WebResources served from a non-Page-based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC
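One caveat worth noting: the lazy initialization in the CachedPage property above isn't thread-safe, so two simultaneous requests could each construct a Page. If that matters in your app, a variation using .NET 4's Lazy<T> closes the race. This is my sketch, not from the original post:

using System;
using System.Web.UI;

public class WebUtils
{
    // Lazy<T> guarantees the Page is constructed only once,
    // even when multiple requests hit this code concurrently.
    private static readonly Lazy<Page> _cachedPage =
        new Lazy<Page>(() => new Page());

    public static string GetWebResourceUrl(Type type, string resource)
    {
        return _cachedPage.Value.ClientScript.GetWebResourceUrl(type, resource);
    }
}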

    Read the article

  • Windows Phone 7 development: first impressions

    - by DigiMortal
After a hard week at work I got some free time to play with the Windows Phone 7 CTP developer tools. Although my first test application is still unfinished, I think this is a good moment to share my first experiences with you. In this posting I will give you a quick overview of the Windows Phone 7 developer tools from a developer's perspective. If you are familiar with Visual Studio 2010 then you will feel comfortable, because the Windows Phone 7 CTP developer tools are based on Visual Studio 2010 Express. Project templates There are five project templates available. Three of them are based on Silverlight and two on XNA Game Studio: Windows Phone Application (Silverlight) Windows Phone List Application (Silverlight) Windows Phone Class Library (Silverlight) Windows Phone Game (XNA Game Studio) Windows Phone Game Library (XNA Game Studio) Currently I am writing two test applications. One of them is based on the Windows Phone Application and the other on the Windows Phone List Application project template. After creating these projects you see the following views in Visual Studio. Windows Phone Application. Windows Phone List Application. I suggest you use some of these templates to get started more easily. Windows Phone 7 emulator You can run your Windows Phone 7 applications on the Windows Phone 7 emulator that comes with the developer tools CTP. If you run your application, the emulator is started automatically and you can try out how your application works in a phone-like emulator. On the right you can see a screenshot of the emulator; currently the Windows Phone List Application is open, as it is created by default. The emulator is a little bit slow and uncomfortable, but it works pretty well. So far I have caused only a couple of crashes during my experiments. In these cases the emulator kept working, but Visual Studio got stuck because it could not communicate with the emulator. One important note: the emulator is based on a virtual machine, although you can see only the phone screen and options toolbar. If you want to run the emulator you must close all virtual machines running on your machine and run Visual Studio 2010 as administrator. Once you run the emulator you can keep it open, because you can stop your application in Visual Studio, modify it, compile and re-deploy it without restarting the emulator. Designing user interfaces You can design the user interface of your application in Visual Studio. When you open XAML files they are displayed in a window with two panels. The left panel shows you the device screen and works as the visual design environment, while the right panel shows you the XAML mark-up and lets you modify the XAML directly if you need to. As this is one of my very first Silverlight applications, I felt more comfortable with the XAML editor because the property names in the property boxes of the visual designer confused me a little bit. The designer panel is not very good because it is visually hard to follow. It has a black background that makes the dark borders of controls very hard to see. If you have a monitor with very high contrast then it may not be a real problem. I have a usual monitor and I have this problem. :) Putting controls on the design surface and dragging and resizing them is also pretty painful. Some controls are drawn correctly, but for some controls you have to set the width and height in the XAML so they can be resized. After some practicing it is not so annoying anymore. On the right you can see the toolbox with some controls. This is all you get out of the box. But it is sufficient to get started.
After getting some experience you can create your own controls or use existing ones from other vendors or developers. If it is your first time doing stuff with Silverlight then keep Google open – you will need it a lot. After getting over the first shock you get the point very quickly and start developing at normal speed. :) Writing source code Writing source code is the most familiar part of this exercise: the good old Visual Studio code editor with all the nice features it has. But here you also get some surprises: The anatomy of Silverlight controls is a little bit different than that of user controls in web and forms projects. Windows Phone 7 doesn't run on a full version of Windows (I bet it is some version of Windows CE or something like it), so there are fewer system classes you can use. Some familiar classes have fewer methods than in the full version of the .NET Framework, and in these cases you have to write all the code by yourself or find libraries or source code somewhere. These problems are really not so much problems as limitations, and you get over them easily. Conclusion The Windows Phone 7 CTP developer tools help you do a lot of things on Windows Phone 7. Although I expected better performance from the tools, I think the current performance is not a problem. So far my first test project is going very well and Google has an answer for almost every question. Windows Phone 7 is a mobile device platform, and therefore it has fewer hardware resources than desktop computers. This is why the toolset is so limited. The more memory you need, the slower the device gets, and as you may guess, the more battery it drains. If you are writing apps for mobile devices, do your best to make your application use as few resources as possible and act as fast as possible.

    Read the article

  • ASP.NET MVC 3 Hosting :: ASP.NET MVC 3 First Look

    - by mbridge
MVC 3 View Enhancements MVC 3 introduces two improvements to the MVC view engine: - Ability to select the view engine to use. MVC 3 allows you to select from any of your installed view engines from Visual Studio by selecting Add > View (including the newly introduced ASP.NET "Razor" engine): - Support for the new ASP.NET "Razor" syntax. The newly previewed Razor syntax is a concise, lightweight syntax. MVC 3 Control Enhancements - Global Filters: ASP.NET MVC 3 allows you to specify a filter that applies globally to all Controllers within an app by adding it to the GlobalFilters collection. The RegisterGlobalFilters() method is now included in the default Global.asax class template and so provides a convenient place to do this, since it will then be called by the Application_Start() method: void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new HandleLoggingAttribute()); filters.Add(new HandleErrorAttribute()); } void Application_Start() { RegisterGlobalFilters(GlobalFilters.Filters); } - Dynamic ViewModel Property: MVC 3 augments the ViewData API with a new "ViewModel" property on Controller which is of type "dynamic" – and therefore enables you to use the new dynamic language support in C# and VB to pass ViewData items using a cleaner syntax than the current dictionary API. public ActionResult Index() { ViewModel.Message = "Hello World"; return View(); } - New ActionResult Types: MVC 3 includes three new ActionResult types and helper methods (a short sketch of these helpers follows at the end of this section): 1. HttpNotFoundResult – indicates that a resource which was requested by the current URL was not found. HttpNotFoundResult will return a 404 HTTP status code to the calling client. 2. PermanentRedirects – The HttpRedirectResult class contains a new Boolean "Permanent" property which is used to indicate that a permanent redirect should be done. Permanent redirects use an HTTP 301 status code. The Controller class includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), and RedirectToActionPermanent(). All of these methods will return an instance of the HttpRedirectResult object with the Permanent property set to true. 3. HttpStatusCodeResult – used for setting an explicit response status code and its associated description. MVC 3 AJAX and JavaScript Enhancements MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and then model-bind it to action method parameters. For example, a jQuery client-side script could define a "save" event handler which will be invoked when the save button is clicked on the client. The code in the event handler then constructs a client-side JavaScript "product" object with 3 fields, with their values retrieved from HTML input elements. Finally, it uses jQuery's .ajax() method to POST a JSON-based request containing the product to a /theStore/UpdateProduct URL on the server: $('#save').click(function () { var product = { ProdName: $('#Name').val(), Price: $('#Price').val() }; $.ajax({ url: '/theStore/UpdateProduct', type: "POST", data: JSON.stringify(product), dataType: "json", contentType: "application/json; charset=utf-8", success: function () { $('#message').html('Saved').fadeIn(); }, error: function () { $('#message').html('Error').fadeIn(); } }); return false; }); MVC will allow you to implement the /theStore/UpdateProduct URL on the server by using an action method as below. The UpdateProduct() action method will accept a strongly-typed Product object as a parameter.
MVC 3 can now automatically bind an incoming JSON post value to the .NET Product type on the server without you having to write any custom binding. [HttpPost] public ActionResult UpdateProduct(Product product) { // save logic here return null; } MVC 3 Model Validation Enhancements MVC 3 builds on the MVC 2 model validation improvements by adding support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0: - Support for the new DataAnnotations metadata attributes like DisplayAttribute. - Support for the improvements made to the ValidationAttribute class, which now supports a new IsValid overload that provides more info on the current validation context, like what object is being validated. - Support for the new IValidatableObject interface, which enables you to perform model-level validation and also provide validation error messages that are specific to the state of the overall model. MVC 3 Dependency Injection Enhancements MVC 3 includes better support for applying Dependency Injection (DI) and also integrating with Dependency Injection/IoC containers. Currently MVC 3 Preview 1 has support for DI in the below places: - Controllers (registering & injecting controller factories and injecting controllers) - Views (registering & injecting view engines, also for injecting dependencies into view pages) - Action Filters (locating and injecting filters)
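To make the three new ActionResult types described above concrete, here is a minimal sketch; the controller and action names are invented for illustration and are not from the original article:

using System.Web.Mvc;

public class ProductsController : Controller
{
    // HttpNotFoundResult: returns a 404 HTTP status code to the client.
    public ActionResult Details(int? id)
    {
        if (id == null)
            return HttpNotFound();
        return View();
    }

    // Permanent redirect: returns an HTTP 301 status code.
    public ActionResult OldDetailsUrl(int id)
    {
        return RedirectToActionPermanent("Details", new { id });
    }

    // HttpStatusCodeResult: sets an explicit status code and description.
    public ActionResult Maintenance()
    {
        return new HttpStatusCodeResult(503, "Down for maintenance");
    }
}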

    Read the article

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
1. Why ? 2. What is it ? 3. How ? 1. Why ? The main idea of the 'grid' is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: 'what will happen if one of the applications must be restored because of a human error?' Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites. Most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases. 2. What ? Pluggable databases are logical units inside a multitenant container database, which consists of one multitenant container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds metadata information common to the pluggable databases inside the System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (dba_* views) or the container level (cdb_* views). There are local users, which are known in specific pluggable databases, and common users, known in all containers. Pluggable databases can be easily plugged into another multitenant container database and converted from a non-CDB. They can undergo point-in-time recovery. 3. How ? Creating a multitenant container database can be done using the Database Configuration Assistant: there you find the new option Create as Container Database. If you prefer 'hand made' databases you can execute the command from an instance in NOMOUNT state: CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE …. And of course this can also be achieved through Enterprise Manager Cloud. A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases. There are 4 ways to create other pluggable databases: 1. Create an empty pdb from the seed 2. Plug in a non-CDB 3. Move a pdb from another CDB 4. Copy a pdb from another CDB We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available: 1. Create an empty pdb and use Data Pump in traditional export/import mode or with Transportable Tablespace or Database mode. This method is suitable for pre-12c databases. 2. Create an empty pdb and use GoldenGate replication. When the pdb catches up with the non-CDB, you fail over to the pdb. 3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package. This is a demonstration of method 3: Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
Step 2: Check whether the non-CDB is pluggable into the multitenant container database: Step 3: Create the pluggable database, connected to the multitenant container database. With the NOCOPY option the files will be reused, but the tempfile is created anew: A service is created and registered automatically with the listener: Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, it will take a few minutes. Step 5: The plugged-in database will be automatically synchronised, creating common users and roles, when it is opened the first time in read-write mode. Step 6: Verify tablespaces and users: there is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl. This method of plugging in a non-CDB is fast and easy for 12c databases. The method for unplugging a pluggable database from a CDB is to create a new non-CDB, use the new full transportable feature of Data Pump, and drop the pluggable database. About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge in Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.

    Read the article

  • Some New .NET Toys (Repost)

    - by Kevin Grossnicklaus
Last week I was fortunate enough to spend time in Redmond on Microsoft's campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft. The weather wasn't exactly sunny, but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at Safeco Field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn't necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy! Visual Studio 2010 SP1 Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as: Silverlight 4 included in the box (as opposed to a separate install); Silverlight 4 profiling; WCF RIA Services SP1; IntelliTrace for 64-bit and SharePoint; and ASP.NET support for IIS Express and SQL CE. Want a description of all that's new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article. Portable Library Tools CTP Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Win7 Phone, Xbox). With this add-in installed you can add a new project of type "Portable Library" and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to use only features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom-built for their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works in this great blog post. Visual Studio Async CTP Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Given the focus on async programming on all types of platforms (and it being the ONLY option in Silverlight and on Win7 Phone), a move towards a simpler and more understandable model is always a good thing. This CTP (called the Visual Studio Async CTP) can be downloaded here. You can read more about this CTP in this blog post. MSDN Code Samples Gallery Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site, and other developers can see your request and provide a sample to the site to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies.
By using .vsix as the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to meet a need. I'll definitely be looking there in the future as I need something or want to share something. MSDN Search Capabilities Another item I learned recently and was not aware of (that might seem trivial to some) is the power of the MSDN site's search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the "Search MSDN with Bing" box at the very top of the page you will see some very interesting results. First, the search doesn't just search the MSDN Library; it searches: MSDN Library, All Microsoft Blogs, CodePlex, StackOverflow, Downloads, MSDN Magazine, Support Knowledgebase (I'm not sure it even ends there, but the above are all I know of). Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returned results from CodePlex, each row in the search results screen would include a large amount of information specific to CodePlex, such as: Looking at the above results immediately tells you everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.
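Returning to the Async CTP mentioned above, here is a small sketch of the async/await pattern it introduces. This is my own illustration, not code from the post; DownloadStringTaskAsync is assumed to be the awaitable WebClient wrapper the CTP provides (it later shipped in the framework itself):

using System;
using System.Net;
using System.Threading.Tasks;

class AsyncCtpDemo
{
    // Compiles under the Async CTP (or any later C# with async/await).
    static async Task<int> GetPageLengthAsync(string url)
    {
        var client = new WebClient();

        // Control returns to the caller here while the download runs;
        // the rest of the method resumes when the task completes.
        string html = await client.DownloadStringTaskAsync(url);
        return html.Length;
    }
}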

    Read the article

  • .NET vs Windows 8

    - by Simon Cooper
So, day 1 of DevWeek. Lots and lots of Windows 8 and WinRT, as you would expect. The keynote had some actual content in it, fleshed out some of the details of how your apps link into the Metro infrastructure, and confirmed that there would indeed be an enterprise version of the app store available for Metro apps. However, that's not what I want to focus this post on. What I do want to focus on is this: Windows 8 does not make .NET developers obsolete. Phew! .NET in the New Ecosystem In all the hype around Windows 8 the past few months, a lot of developers have got the impression that .NET has been sidelined in Windows 8; C++ and COM are back in vogue, and HTML5 + JavaScript is the New Way of writing applications. You know .NET? It's yesterday's tech. Enter the 21st Century and write <div>! However, after speaking to people at the conference, and after a couple of talks by Dave Wheeler on the innards of WinRT and how .NET interacts with it, my views on the coming operating system have changed somewhat. To summarize what I've picked up, in no particular order (none of this is official, just my sense of what's been said by various people): Metro apps do not replace desktop apps. That is, Windows 8 fully supports .NET desktop applications written for every previous version of Windows, and will continue to do so for the foreseeable future. There are some apps that simply do not fit into Metro. They do not fit into the touch-based paradigm, and never will. Traditional desktop support is not going away anytime soon. The reason Silverlight has been hidden in all the Metro hype is that Metro is essentially based on Silverlight design principles. Silverlight developers will have a much easier time writing Metro apps than desktop developers, as they are already used to all the principles of sandboxing and separation introduced with Silverlight. It's desktop developers who are going to have to adapt how they work. .NET + XAML is equal to HTML5 + JS in importance. Although the underlying WinRT system is built on C++ & COM, most application development will be done either using .NET or HTML5. Both systems have their own wrapper around the underlying WinRT infrastructure, hiding the implementation details. The CLR is unchanged; it's still the .NET 4 CLR, running IL in .NET assemblies. The thing that changes between desktop and Metro is the class libraries, which have more in common with the Silverlight libraries than the desktop libraries. In Metro, although all the types look and behave the same to callers, some of the core BCL types are now wrappers around their WinRT equivalents. These wrappers are then enhanced using standard .NET types and code to produce the Metro .NET class libraries. You can't simply port a desktop app to Metro. The underlying file IO, network, timing and database access is either completely different or simply missing. Similarly, although the UI is programmed using XAML, the behaviour of the Metro XAML is different to WPF or Silverlight XAML. Furthermore, the new design principles and touch-based interface for Metro applications demand a completely new UI. You will be able to re-use sections of your app encapsulating pure program logic, but everything else will need to be written from scratch.
Microsoft has taken the opportunity to remove a whole raft of types and methods from the Metro framework that are obsolete (non-generic collections) or break the sandbox (synchronous APIs); if you use these, you will have to rewrite to use the alternatives, if they exist at all, to move your apps to Metro. If you want to write public WinRT components in .NET, there are some quite strict rules you have to adhere to. But the compilers know about these rules; you can write them in C# or VB, and the compilers will tell you when you do something that isn't allowed and deal with the translation to WinRT metadata rather than .NET assemblies. It is possible to write a class library that can be used in Metro and desktop applications. However, you need to be very careful not to use types that are available in one but not the other. One can imagine developers writing their own abstraction around file IO and UIs (MVVM anyone?) that can be implemented differently in Metro and desktop, but look the same within your shared library. So, if you're a .NET developer, you have a lot less to worry about. .NET is a viable platform on Metro, and traditional desktop apps are not going away. You don't have to learn HTML5 and JavaScript if you don't want to. Hurray!
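To give a concrete picture of the shared-abstraction idea mentioned above: a shared library codes against an interface, and each platform supplies its own implementation. The sketch below is mine, with invented names (IFileStore is not any Microsoft API):

using System.Threading.Tasks;

// Shared library: portable code sees only the interface.
public interface IFileStore
{
    Task<string> ReadAllTextAsync(string path);
}

// Desktop implementation: uses the full .NET file APIs.
public class DesktopFileStore : IFileStore
{
    public Task<string> ReadAllTextAsync(string path)
    {
        return Task.Run(() => System.IO.File.ReadAllText(path));
    }
}

// A Metro implementation would instead wrap Windows.Storage.StorageFile,
// but would look identical to callers of IFileStore.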

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor, and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connections so I could do database operations. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. Some write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pane and Query Pane – SQL Shortcut Many times when I am writing a query I have to scroll through the results displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts: F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server views are created – I am very confident that in one or two places you will notice the scenario wherein, in a view, the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge its presence either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how do you create comma-separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to use Azure!
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on any particular table. We needed a query that returns an additional column in the previously listed query, containing the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles, as I posted three interesting puzzles. To this day I notice pretty good interest in the puzzles. They are tricky, but they bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it, you enjoy fixing it and start to appreciate the breaking changes. Well, this is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is a question for you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example, where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
For the last few weeks, I have been working on application packaging using all the widely used tools like InstallShield, WISE, WiX and Visual Studio Installer. So, I thought it would be good to post about how to build the installers developed using these tools with Team Build 2010. This post will focus on how to build InstallShield-generated packages using Team Build 2010. For the release of VS2010, Microsoft partnered with Flexera, the makers of InstallShield, to create InstallShield Limited Edition, especially for the customers of Visual Studio. Microsoft first planned to release WiX (Windows Installer XML) with VS2010, but later dropped WiX from VS2010 for reasons best known to them and partnered with InstallShield for the Limited Edition. It disappointed a lot of people, because InstallShield Limited Edition provides only a few of InstallShield's features and it may not be feasible to build complex installer packages using it, whereas WiX is open source with no license costs and has proved efficient in building the most complex packages. Only the last three features in the list below are available in InstallShield Limited Edition out of the total features offered by InstallShield:
Standalone Build System: Maintain a clean build machine by using only the part of InstallShield that compiles the installations.
InstallShield Best Practices Validation Suite: Avoid common installation issues.
Try and Die Functionality: Create a fully functional trial version of your product.
InstallShield Repackager: Create Windows Installer setups from any legacy installation.
Multilingual Support: Present installation text in up to 35 languages.
Microsoft App-V™ Support: Deploy your applications as App-V virtual packages that run without conflict.
Industry-Standard InstallScript: Achieve maximum flexibility in your installations.
Dialog Editor: Modify the layout of existing end-user dialogs, create new custom dialogs, and more.
Patch Creation: Build updates and patches for your products.
Setup Prerequisite Editor: Easily control prerequisite restart behavior and source locations.
String Editor View: Control the localizable text strings displayed at run time with this spreadsheet-like table.
Text File Changes View: Configure search-and-replace actions for content in text files to be modified at run time.
Virtual Machine Detection: Block your installations from running on virtual machines.
Unicode Support: Improve multi-language installation development.
Support for 64-Bit COM Extraction: Extract COM data from a 64-bit COM server.
Windows Installer Installation Chaining: Add MSI packages to your main installation and chain them together.
XML Support: Save time by quickly testing XML configuration changes to installation projects.
Billboard Support for Custom Branding: Display Adobe Flash billboards and other graphic files during the install process.
SaaS Support (IIS 7 and SSL Technologies): Easily deploy Windows-based Web applications.
Project Assistant: Jumpstart a project by using a simplified set of views.
Support for Digital Signatures: Save time by digitally signing all your files at build time.
Easily Run Custom Actions: Schedule a custom action to run at precisely the right moment in your installation.
Installation Prerequisites: Check for and install prerequisites before your installation is executed.
To create an InstallShield project in Visual Studio and build it using Team Build 2010, you first have to add the InstallShield project template to your solution file. If you want to use InstallShield Limited Edition you can add it from File → New → Project → Other Project Types → Setup and Deployment → InstallShield LE, and if you are using another version of InstallShield, then you have to add it from File → New → Project → InstallShield Projects. Here, I'm using the InstallShield 2011 Premier edition as I already have it installed. I have created a simple package for the TailSpin application, which has a feature called Web, a few components, and an IIS web site for the TailSpin application. Before I started working on this, I thought I might need to build the package by calling the InvokeProcess activity in the build process template, or have to create a new custom activity. But it got built without any changes to the build process template. However, it was failing with the error message below. C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files(x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. This error is due to the 64-bit build machine I'm using. This issue is reproducible if you are queuing a build on a 64-bit build machine. To avoid it, you have to ensure that you configure the build definition for your InstallShield project to load the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that the InstallShield.Tasks.dll file could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above change, the build succeeded.

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
I have had a long-running email thread going between Dan and David over at Valspar and myself. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. Not too bad if you have two or three data connections, but beyond that it becomes a maintenance nightmare having to update queries or layouts. Ideally you want to have just a single report definition on the BIP server and to dynamically set the connection to be used at runtime based on the user or the system the user is in. A quick bit of digging and help from Shinji on the development team, and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once installed, I used the best web service testing tool I have yet to come across - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters and then test them too. The XML packet has a new dynamic data source entry. You can set your own custom JDBC connection or just specify an existing data source name that's defined on the server. <pub:runReport> <pub:reportRequest> <pub:attributeFormat>xml</pub:attributeFormat> <pub:attributeTemplate>0</pub:attributeTemplate> <pub:byPassCache>true</pub:byPassCache> <pub:dynamicDataSource> <pub:JDBCDataSource> <pub:JDBCDriverClass></pub:JDBCDriverClass> <pub:JDBCDriverType></pub:JDBCDriverType> <pub:JDBCPassword></pub:JDBCPassword> <pub:JDBCURL></pub:JDBCURL> <pub:JDBCUserName></pub:JDBCUserName> <pub:dataSourceName>Conn1</pub:dataSourceName> </pub:JDBCDataSource> </pub:dynamicDataSource> <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath> </pub:reportRequest> <pub:userID>Administrator</pub:userID> <pub:password>Administrator</pub:password> </pub:runReport> So I have Conn1 and Conn2 defined, which are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call Java code - just a case of bringing in the BIP Java libs to my Java project. public static void main(String[] args) { PublicReportServiceService publicReportServiceService = new PublicReportServiceService(); PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11(); String userID = "Administrator"; String password = "Administrator"; ReportRequest rr = new ReportRequest(); rr.setAttributeFormat("xml"); rr.setAttributeTemplate("1"); rr.setByPassCache(true); rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo"); rr.setReportOutputPath("c:\\temp\\output.xml"); BIPDataSource bipds = new BIPDataSource(); JDBCDataSource jds = new JDBCDataSource(); jds.setDataSourceName("Conn1"); bipds.setJDBCDataSource(jds); rr.setDynamicDataSource(bipds); try { publicReportService.runReport(rr, userID, password); } catch (InvalidParametersException e) { e.printStackTrace(); } catch (AccessDeniedException e) { e.printStackTrace(); } catch (OperationFailedException e) { e.printStackTrace(); } } Note, I'm no Java whiz kid or whizzy old bloke, at least not unless I've had a coffee.
JDeveloper has a nice feature where you point it at the WSDL and it creates everything to support your calling code for you. A couple of things to remember: 1. When you call the service, remember to set the bypass-the-cache option. Forget it, and much scratching of your head and taking my name in vain will ensue. 2. My demo actually hit the same database but used two users: one accessed the base tables, the other views with the same names. For far too long I thought the connection swapping was not working. I was getting the same results for both users until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to have a generic query that will depend entirely on the connection. It's a neat feature if you want to be able to switch connections and only define a single report and call it remotely. Now if you want the connection to be set dynamically based on the user when the report is run via the user interface, that's going to be more tricky ... need to think about that one!

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
Nodejs, Sql server and Json response with Rest This post is part 2 of Microsoft Sql Server driver for Node js. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node js. Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Nodejs - Restify, but that needs no prior learning. Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering, etc. that would be needed if your app had views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight. Set up - You can continue with the same directory or project structure we had in the previous post, or start a new one. Install restify using npm and you are good to go. npm install restify Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() - SLICK - ha? var restify = require('restify'); var server = restify.createServer(); server.listen(8080, function () { console.log('%s listening at %s', server.name, server.url); }); Then make sure you provide a port for the server to listen at. The callback function is optional but helps you for debugging purposes. Once you are done, save the file, go to the command prompt and run 'node server.js', and you should see the following: To test the server, go to your browser and type the address 'http://localhost:8080/' and oops, you will see an error. Why is that? Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with, I'd like to return whatever is typed in the url after my name, and the following code should do it. server.get('/ChanderDhall/:name', function respond(req, res, next) { res.end("Chander Dhall " + req.params.name); }); You can also avoid writing callbacks inline. Something like this. function respond(req, res, next) { res.end("Chander Dhall " + req.params.name); } server.get('/hello/:name', respond); Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall LovesNode'. NOTE: Make sure your url has the right case, as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);' Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use Sql server with Node and return JSON. To see this in action, let's go ahead and create another route to a list of Employees from a stored procedure. server.get('/Employees', Employees); The following code will return a JSON response.

function Employees(req, res, next) {
    res.header("Content-Type", "application/json"); // need to specify the Content-Type, which is JSON in our case
    sql.open(conn_str, function (err, conn) {
        if (err) {
            // logs an error
            console.log("Error opening the database connection!");
            return;
        }
        console.log("before query!");
        conn.queryRaw("exec sp_GetEmployees", function (err, results) {
            if (err) {
                // connection is open but an error occurs while running the query
                console.log("Error running query!");
                return;
            }
            // (assumed completion: the tail of this listing was truncated;
            // serializing the raw result set as the JSON response is the natural ending)
            res.end(JSON.stringify(results));
        });
    });
}

What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.
Summary: We've discussed how to execute a stored procedure using the Microsoft SQL Server driver for Node. We have also discussed how to format and send out clean JSON to the app calling this API.

    Read the article

  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
Yesterday I solved one caching problem with a local community portal. I enabled output caching on SharePoint Server 2007 to make the site faster. Although caching works fine, I needed to do some additional work, because there are some controls that show different content to different users. In this example I will show you how to use "donut caching" with user controls – a powerful way to drive some content around the cache. About donut caching Donut caching means that although you are caching your content you have some holes in it, so you can still affect the output that goes to the user. For example, you can cache the front page of your site and still show a welcome message that contains the correct user name. To get a better idea of donut caching I suggest you read ScottGu's posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically donut caching uses the ASP.NET Substitution control. In the output, this control is replaced by the string you return from the static method bound to the substitution control. Again, take a look at the ScottGu blog posting I referred to above. Problem If you look at Scott's example it is pretty plain and simple in its output. All it does is write out the current user name as a string. Here are examples of my login area for anonymous and authenticated users:    It is clear that outputting the mark-up for these views as a string built up in code is pretty lame. Every little change in the design will end up as a new version of the controls library, because some parts of the design "live" there. Solution: using user controls I worked out an easy solution to my problem. I used cache substitution and user controls together. I have three user controls: LogInControl – this is the proxy control that checks which "real" control to load. AnonymousLogInControl – template and logic for the anonymous users' login area. AuthenticatedLogInControl – template and logic for the authenticated users' login area. This is the control we render for each user separately, because it contains the user name and the user profile fill percent. The anonymous control is not very interesting, because it is only about keeping mark-up in a separate file. The interesting parts are LogInControl and AuthenticatedLogInControl. Creating the proxy control The first thing was to create a control that has a substitution area where the "real" control is loaded. This proxy control is also responsible for deciding which control to load. The definition of the control is very primitive. <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %> <asp:Substitution runat="server" MethodName="ShowLogInBox" /> But the code is a little bit tricky. Based on the current user instance we decide which login control to load. Then we create a page instance and load our control through it. When the control is loaded we call the DataBind() method. In this method we evaluate all fields in the loaded control (it was the best choice, as Load and other events will not be fired). Take a look at the code.
public static string ShowLogInBox(HttpContext context) {     var user = SPContext.Current.Web.CurrentUser;     string controlName;       if (user != null)         controlName = "AuthenticatedLogInControl.ascx";     else         controlName = "AnonymousLogInControl.ascx";       var path = "~/_controltemplates/" + controlName;     var output = new StringBuilder(10000);       using(var page = new Page())     using(var ctl = page.LoadControl(path))     using(var writer = new StringWriter(output))     using(var htmlWriter = new HtmlTextWriter(writer))     {         ctl.DataBind();         ctl.RenderControl(htmlWriter);     }     return output.ToString(); } When the control is bound to data, we ask it to render its contents to a StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how careful I am with disposing resources. :) The method that returns content for the substitution control is a static method that has no connection to a control instance, because when the page is read from cache there are no instances of controls available. Conclusion As you saw, it was not very hard to use donut caching with user controls. Instead of writing the controls' mark-up as strings in the static method that is bound to the substitution control, we can still use our user controls.
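For completeness, here is a sketch of how the proxy control might be registered and dropped into a page or master page. The tag prefix and ID are invented for illustration; only the _controltemplates path follows the article's convention:

<%@ Register TagPrefix="mp" TagName="LogInBox"
    Src="~/_controltemplates/LogInControl.ascx" %>

<mp:LogInBox runat="server" ID="LogInBox1" />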

    Read the article

  • using Unity Android In a sub view and add actionbar and style

    - by aeroxr1
I exported a simple animation from Unity3D (version 4.5) as an Android project. With Eclipse I modified the manifest and added another activity. In this activity I put a button that starts the animation, and this is the result. The action bar appears in the main activity but it doesn't in the Unity activity :( How can I add the action bar and the style of the first activity to the Unity animation activity? This is the Unity activity's code: package com.rabidgremlin.tut.redcube; import android.app.NativeActivity; import android.content.res.Configuration; import android.graphics.PixelFormat; import android.os.Bundle; import android.view.KeyEvent; import android.view.MotionEvent; import android.view.View; import android.view.ViewGroup; import android.view.Window; import android.view.WindowManager; import com.unity3d.player.UnityPlayer; public class UnityPlayerNativeActivity extends NativeActivity { protected UnityPlayer mUnityPlayer; // don't change the name of this variable; referenced from native code // Setup activity layout @Override protected void onCreate (Bundle savedInstanceState) { //requestWindowFeature(Window.FEATURE_NO_TITLE); super.onCreate(savedInstanceState); getWindow().takeSurface(null); //setTheme(android.R.style.Theme_NoTitleBar_Fullscreen); getWindow().setFormat(PixelFormat.RGB_565); mUnityPlayer = new UnityPlayer(this); /*if (mUnityPlayer.getSettings ().getBoolean ("hide_status_bar", true)) getWindow ().setFlags (WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); */ setContentView(mUnityPlayer); mUnityPlayer.requestFocus(); } // Quit Unity @Override protected void onDestroy () { mUnityPlayer.quit(); super.onDestroy(); } // Pause Unity @Override protected void onPause() { super.onPause(); mUnityPlayer.pause(); } // let's remove this onResume() and try modifying onResume() // Resume Unity @Override protected void onResume() { super.onResume(); mUnityPlayer.resume(); } // let's make some changes here // This ensures the layout will be correct. @Override public void onConfigurationChanged(Configuration newConfig) { super.onConfigurationChanged(newConfig); mUnityPlayer.configurationChanged(newConfig); } // Notify Unity of the focus change. @Override public void onWindowFocusChanged(boolean hasFocus) { super.onWindowFocusChanged(hasFocus); mUnityPlayer.windowFocusChanged(hasFocus); } // For some reason the multiple keyevent type is not supported by the ndk. // Force event injection by overriding dispatchKeyEvent().
    @Override
    public boolean dispatchKeyEvent(KeyEvent event)
    {
        if (event.getAction() == KeyEvent.ACTION_MULTIPLE)
            return mUnityPlayer.injectEvent(event);
        return super.dispatchKeyEvent(event);
    }

    // Pass any events not handled by (unfocused) views straight to UnityPlayer
    @Override
    public boolean onKeyUp(int keyCode, KeyEvent event)   { return mUnityPlayer.injectEvent(event); }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) { return mUnityPlayer.injectEvent(event); }

    @Override
    public boolean onTouchEvent(MotionEvent event)        { return mUnityPlayer.injectEvent(event); }

    /*API12*/
    public boolean onGenericMotionEvent(MotionEvent event) { return mUnityPlayer.injectEvent(event); }
}

And this is the AndroidManifest.xml:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.rabidgremlin.tut.redcube"
    android:versionCode="1"
    android:versionName="1.0" >

    <!-- android:theme="@android:style/Theme.NoTitleBar" -->

    <supports-screens
        android:anyDensity="true"
        android:largeScreens="true"
        android:normalScreens="true"
        android:smallScreens="true"
        android:xlargeScreens="true" />

    <application
        android:icon="@drawable/app_icon"
        android:label="@string/app_name"
        android:theme="@android:style/Theme.Holo.Light" >
        <activity
            android:name="com.rabidgremlin.tut.redcube.UnityPlayerNativeActivity"
            android:configChanges="mcc|mnc|locale|touchscreen|keyboard|keyboardHidden|navigation|orientation|screenLayout|uiMode|screenSize|smallestScreenSize|fontScale"
            android:label="@string/app_name"
            android:screenOrientation="portrait" >
            <!-- android:launchMode="singleTask" -->
            <meta-data android:name="unityplayer.UnityActivity" android:value="true" />
            <meta-data android:name="unityplayer.ForwardNativeEventsToDalvik" android:value="false" />
        </activity>
        <activity
            android:name="com.rabidgremlin.tut.redcube.MainActivity"
            android:label="@string/title_activity_main" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

    <uses-sdk android:minSdkVersion="17" android:targetSdkVersion="19" />
    <uses-feature android:glEsVersion="0x00020000" />
</manifest>
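The question is left open in this excerpt, but since it is a common Unity-on-Android issue, here is a hedged sketch of one possible approach. NativeActivity is meant for pure native rendering, and its raw-surface setup tends to bypass the themed window decor that hosts the action bar; Unity 4.x can also host the player in a plain Activity (its stock UnityPlayerActivity does exactly that), where the manifest's Theme.Holo.Light can supply an ActionBar. The class name UnityPlayerThemedActivity and the title string are illustrative assumptions; the key/touch forwarding overrides from the original activity would still need to be carried over, and whether the GL surface and the ActionBar coexist cleanly depends on the Unity player version, so treat this as a starting point rather than a guaranteed fix.

package com.rabidgremlin.tut.redcube;

import android.app.Activity;
import android.graphics.PixelFormat;
import android.os.Bundle;

import com.unity3d.player.UnityPlayer;

// Sketch: host UnityPlayer in a regular themed Activity so the
// Theme.Holo.Light action bar declared in the manifest is created.
public class UnityPlayerThemedActivity extends Activity
{
    protected UnityPlayer mUnityPlayer; // don't rename; referenced from native code

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        // No getWindow().takeSurface(null) here: this is not a NativeActivity,
        // so the window keeps its normal decor (including the action bar).
        getWindow().setFormat(PixelFormat.RGB_565);

        mUnityPlayer = new UnityPlayer(this);
        setContentView(mUnityPlayer); // UnityPlayer is a View subclass in Unity 4.x
        mUnityPlayer.requestFocus();

        if (getActionBar() != null)
            getActionBar().setTitle("RedCube"); // assumed title, purely illustrative
    }

    // Forward the lifecycle to the player exactly as the original activity does.
    @Override
    protected void onDestroy()
    {
        mUnityPlayer.quit();
        super.onDestroy();
    }

    @Override
    protected void onPause()
    {
        super.onPause();
        mUnityPlayer.pause();
    }

    @Override
    protected void onResume()
    {
        super.onResume();
        mUnityPlayer.resume();
    }
}

The manifest's <activity android:name=...> entry would then point at this class instead of UnityPlayerNativeActivity.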

    Read the article

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet, only over the same local network — and even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet.

Why Stream?

When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending back mouse, keyboard, and controller input to the first PC. This allows a fast gaming PC to power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room of your house.

Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers its own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.

How to Get Started

In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.)

Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on the host PC, which sends its audio and video to the PC in front of you; your input on the client is sent back to the server.

Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too.

Improving Performance

Here’s what Valve recommends for good streaming performance:

Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency.

Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough.

Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed.

Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up.

In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory.

Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.”

Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
Microsoft TechEd 2010 - Day 2 @ Bangalore

Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:

Developing a Scalable Media Application using ASP.NET MVC - This was somewhat advanced stuff. Anyway, I couldn't understand much because this is not my cup of tea and I haven't worked on it before.

ASP.NET MVC Unplugged - This was really great because the session covered MVC from the basics, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same.

Building RESTful Applications with the Open Data Protocol - Concepts were explained from the basics of how to build RESTful services, going on to some advanced configurations of the same.

Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET web applications. Instead of using in-proc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, with just a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.)

Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample, and explained in detail the concepts of linking it with RIA Services.

Apart from these sessions, some small events happened during the breaks, like discussions about technology and innovations, music, jokes, mimicry, etc. For taking part in all these things, the developers were given some kool gifts / goodies like USBs, T-Shirts, etc.

Today I also got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and for that I was required to update my MCTS to the .NET 3.5 framework. I did so, cleared it, and am now an MCTS in .NET 3.5 Web Apps. On doing this I got a T-Shirt, and they gave me something called Learning $, worth 30$. At various stalls, for attending each quiz or game or for referrals, we got more Learning $, which we could redeem later based on our total Learning $. I got 105$, which I redeemed for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light, and e-learning content activation.

After all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There was a lucky draw which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing a raffle in the lucky draw, they placed it on a device called Microsoft Surface, a kind of big touch-screen device; on putting the raffle on it, it detected the attendee's code and intelligently said how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
Apart from that, they showed demos on:

Research by 2 Tamil Nadu students from Krishna Arts and Science College, who took 1200 photographs of their college from different angles, put them up in Bing Maps using Silverlight, and linked them with Photosynth, which showed a 3D view of their college based on the photos they uploaded.

Research by Microsoft on panoramic HD views of images. A young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and nearby places, narration of the historical information, embedded videos, and high-definition images which we could zoom to a very detailed level.

A demo of a business app with Silverlight, Business Intelligence (BI), and maps integrated. It showed the sales of a particular product across locations.

Some kool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rahman song Humma Humma...

A dream home project by Raman, who is currently using it in his own home; robots control his home. They showed a video of this. Here is the list of activities the robot does for him:

When he reads a book, the robot automatically scans it and shows the image of the person in the book on the screen (TV or computer) in front of him, shows a Wikipedia page about that person, and says "that person is not on LinkedIn — do you want to add him?"

If he sees IPL match news in the book and smiles, it understands he is interested, opens a related website, and shows the current game and scorecard.

It cooks for him.

It cleans the room for him whenever he leaves the house.

When he is busy, if some intruder comes inside his house, his computer automatically switches his screen to show video of the person coming inside.

When he wakes up, it automatically opens up the system, loads his mails with the news by the side, etc.

Some demos of Microsoft Pivot. This was in Live Labs but is now available at getpivot.com; it's a way of pivoting pictorial data based on categories and filters on the searches that we do.

And finally, on filling up some feedback forms, we got T-Shirts and Microsoft Visual Studio 2010 Training Kit CDs.

What's more at TechEd??? Stay tuned!!! Will update you soon on the other happenings!!

PS: I typed a lot of content for more than an hour, but I pressed backspace and it went to the previous page; all my content was lost and I was not able to retrieve it, so I typed everything again.

    Read the article

< Previous Page | 252 253 254 255 256 257 258 259 260 261 262 263  | Next Page >