Search Results

Search found 27011 results on 1081 pages for 'look and feel'.

Page 47/1081 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • What Works in Data Integration?

    - by dain.hansen
    TDWI recently put out this paper on "What Works in Data Integration". I invite you especially to take a look at the section on "Accelerating your Business with Real-time Data Integration" and the DIRECTV case study. The article discusses some of the technology considerations for BI/DW and how data integration plays a role in delivering timely, accessible, and high-quality data. It goes on to outline the three key requirements for delivering high performance, low impact, and reliability, and how that can translate to faster results. The DIRECTV webinar is something you definitely want to take a look at: you'll hear how DIRECTV successfully transformed their data warehouse investments into a competitive advantage with Oracle GoldenGate.

    Read the article

  • Introducing the New Face of Fusion Applications

    - by mvaughan
    By Misha Vaughan and Kathy Miedema, Oracle Applications User Experience
    At OpenWorld 2012, the Oracle Applications User Experience (UX) team unveiled the new face of Fusion Applications. You may have seen it in sessions presented by Chris Leone, Anthony Lye, Jeremy Ashley, or others, or you may have gotten a look on the demogrounds. This screenshot shows the new Oracle Fusion Applications entry experience.
    Why are we delivering a new face for Fusion Applications? Because, says Ashley, the vice president of the Oracle Applications User Experience team, we want to provide a simple, modern, productive way for users to complete their top quick-entry tasks. The idea is to provide a clear, productive user experience that is backed by the full functionality of Fusion Applications. The first release of the new face of Fusion focuses on three types of users. It provides a fully functional gateway to Fusion Applications for:
    • New and casual users who need quick access to self-service tasks
    • Professional users who need fast access to quick-entry, high-volume tasks
    • Users who are looking for a way to quickly brand their portal for employees
    The new face of Fusion allows users to move easily from navigation to action, Ashley said, and it has been designed for any device -- Mac, PC, iPad, Android, SmartBoard -- in the browser. The Oracle Fusion Applications Employee Directory.
    How did we build it? The new face of Fusion is essentially a custom shell, developed by the Apps UX team, and a set of page templates that embodies a simple design aesthetic. It's repeatable, providing consistency across its pages, and requires little to zero training. More specifically, the new face of Fusion has been built on ADF. The Applications UX team created pages in JDeveloper using local task flows bound to existing view objects. Three new components were commissioned from ADF, and existing Fusion components were re-skinned to deliver a simple, modern user experience. It really is that simple -- and to prove that point, we've been sharing our story around the new face of Fusion on several Oracle channels such as this one.
    Want to know more? Check the VoX blog for our favorite highlights from OpenWorld, which included demos of the new face of Fusion. And take a look at these posts from Ace Directors Debra Lilley and Floyd Teter. Special mention to Floyd for the first screenshot credit. Also a nod to Wilfred vander Deijl for capturing the demo to share as part 1 and part 2. We will also be hitting upcoming user group conferences with our demos, and you can always reach out to one of our Fusion User Experience Advocates for a look.

    Read the article

  • Get a Silverlight XAP signing certificate for cheap thanks to GoDaddy

    One of the new features in Silverlight 4 is the ability to sign your XAP applications so that your out-of-browser trusted applications look more friendly (trusted) to your users, they come from a verified publisher, and they can take advantage of the auto-update APIs in Silverlight. If you don't know what I'm talking about, here are some resources for background: XAP Signing in Silverlight 4; HOWTO Video: Sign Silverlight 4 Applications. Basically if you are writing a Silverlight 4 trusted...

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.
    Native and Managed Modules
    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there are a host of native mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class) which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content including static content like HTML files, images, JavaScript and CSS resources, etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules
    When you create a new ASP.NET project, typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically the interesting thing about this is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether registered managed modules run for every request. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular module. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that the non-ASP.NET requests immediately no longer get funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.
    runAllManagedModulesForAllRequests and Http Application Events
    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global HttpApplication object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every HTTP request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What does all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it.
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to allow MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests, but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.
    Plan for non-ASP.NET Requests
    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
    As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET.
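    To make the module behavior above concrete, here is a minimal sketch (not from the original post) of the kind of managed module being discussed. The class body and the message it writes are illustrative; only the HowAspNetWorks.SharewareMessageModule name comes from the article's web.config examples. Registered with preCondition="managedHandler", it sees only ASP.NET requests when runAllManagedModulesForAllRequests is false; registered without the preCondition, or with the flag set to true, it fires for everything, including static files.

    using System;
    using System.Web;

    namespace HowAspNetWorks
    {
        // A trivial IHttpModule that appends a marker to every response it sees.
        public class SharewareMessageModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // EndRequest fires for every request routed through the managed pipeline.
                app.EndRequest += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    context.Response.Write("<!-- processed by SharewareMessageModule -->");
                };
            }

            public void Dispose() { }
        }
    }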

    Read the article

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.
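    One common way to keep the cost down (offered here only as a sketch, not something from the question) is a separable Gaussian blur: blur horizontally into an intermediate target, then vertically, so a radius-7 kernel costs 2 x 15 samples per pixel instead of the 225 a full 15 x 15 kernel would need. The weight calculation below is plain C#; the radius and sigma parameters are illustrative values you would feed to the horizontal and vertical shader passes.

    using System;

    static class GaussianBlurKernel
    {
        // Returns normalized weights for taps 0..radius of a 1D Gaussian.
        // The same weights are reused for the mirrored taps (-1..-radius),
        // so normalization counts every non-center tap twice.
        public static float[] ComputeWeights(int radius, float sigma)
        {
            var weights = new float[radius + 1];
            float sum = 0f;
            for (int i = 0; i <= radius; i++)
            {
                weights[i] = (float)Math.Exp(-(i * i) / (2.0 * sigma * sigma));
                sum += (i == 0) ? weights[i] : 2f * weights[i];
            }
            for (int i = 0; i <= radius; i++)
                weights[i] /= sum; // full symmetric kernel now sums to 1
            return weights;
        }
    }

    Varying the radius (or the mip level of the intermediate target) per pixel based on the surface's gloss value is one way to make rougher surfaces look more diffuse without paying for a huge kernel everywhere.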

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: the parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate.
    On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:
    SELECT  deqs.creation_time,
            deqs.execution_count,
            deqs.max_logical_reads,
            deqs.max_elapsed_time,
            deqs.total_logical_reads,
            deqs.total_elapsed_time,
            deqp.query_plan,
            SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                      (deqs.statement_end_offset - deqs.statement_start_offset) / 2
                      + 1) AS QueryStatement
    FROM    sys.dm_exec_query_stats AS deqs
            CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
            CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
    WHERE   dest.dbid = DB_ID('Warehouse')
            AND deqs.statement_end_offset > 0
            AND deqs.statement_start_offset > 0
    ORDER BY deqs.max_logical_reads DESC ;
    And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data and a sort operation. A sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question, why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows? Well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user-defined function. Let me check, let me check. YES! A multi-statement table-valued user-defined function. And another tuning opportunity.
    This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000-row table, no, not at all. OK. I lied. Of course it does. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers. All calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • Selling your services when you use uncommon technologies

    - by speeder
    I took a look at Stackoverflow's most popular profiles, and then I did the same on several other sites, and then I took a look at job postings on several boards, mostly out of curiosity, because I noticed this: if you work with Java, .NET or other managed languages, or you work with stuff that is popular for web development (Ruby, JavaScript, etc...), you can get lots of points on Stackoverflow, find lots of jobs and clients, find forums, friends, colleagues, etc... But how does a programmer of uncommon languages (Lua, pure C, Lisp, D, Ada, Haskell, etc...) find information, sell his services, and so on? EDIT: This also applies to fields: if you work with web, corporate software, databases, etc., it is great... If you dislike those previous 3, no one will ever hire your services...

    Read the article

  • Web framework for iPad and common desktop browsers?

    - by Chris
    We are developing a web-based, commercial point-of-sale application. We'd like the same web site to work well on an iPad as well as in desktop browsers. We're looking for a web framework that makes the site look good on an iPad, but also makes the site work well in a desktop browser such as Chrome, IE, or Firefox on Windows or a Mac. I found quite a few at 18 Mobile Frameworks and Development Tools for Creating iPhone Apps. Most of them, such as jQTouch, help a web site look and act more like a native iPhone application, but they don't emphasize the cross-platform/browser experience. The exception seems to be SproutCore, which seems to be a full-fledged JavaScript MVC application framework. I did have trouble getting some of the demos to work under Chrome, but what did work looks good. What framework(s) have you actually used to develop web sites that work on an iPad and in a desktop browser? If you didn't use a framework, how did you get it to work well under both environments?

    Read the article

  • Lost transparency in SDL surfaces drawn manually

    - by Christian Ivicevic
    I want to create SDL_Surface objects for each layer of my 2d tile-based map so that I have to render only one surface per layer rather than too many tiles. With normal tiles which do not have transparent areas this works well; however, I am not able to create a SDL_Surface with transparent pixels everywhere to be able to draw some tiles on specific parts which should be visible (I do NOT want the whole surface to appear with a specific opacity - I want to create overlaying tiles where one can look through). Currently I am creating my layers like this to draw with SDL_BlitSurface on them: SDL_Surface* layer = SDL_CreateRGBSurface( SDL_HWSURFACE | SDL_SRCALPHA, layerWidth, layerHeight, 32, 0, 0, 0, 0); If you have a look at this screenshot I have provided here you can see that the bottom layer with no transparent parts gets rendered correctly. However the overlay with the tree tile (which is transparent in the top left corner) is drawn on its own surface, which is black and not transparent as expected. The expected result (concerning the transparency) can be seen here. Can anyone explain how to handle surfaces which are actually transparent rather than drawing all my overlay tiles separately?

    Read the article

  • Looking for enterprise web application design inspiration [closed]

    - by Farshid
    I've checked many websites to be inspired about what the look and feel of a serious enterprise web-application should look like. But all of the ones I saw were designed to be used by individual users, not serious corporate users. The designs I saw were mostly happy-colored, and looked like they were developed by a small team of eager young passionate developers. But what I'm looking for are showcases of serious web apps' designs (e.g. web apps developed by large corporations) that are developed to be used by a large number of corporate users. Can you suggest sources for this kind of inspiration?

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) – “What the Business Says Is Not What the Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do barriers of technical savvy exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture:
    When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh.
    I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above.
    My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube.
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • I thought everyone did it like this – Training Session Code Management

    - by Fatherjack
    One of an occasional series of blogs about things that I do that perhaps others don't. From very early on in my dealings with SQL Server Management Studio I started using Solutions and Projects. This means that I started using them when writing sessions, and it wasn't until speaking with someone at PASS Summit 2013 that I found out that this was a process that was unheard of by some people. So, here we go: a run-through of how I create and manage code and other documents that I use in presentations. For people unsure what solutions and projects are:
    • Solution – a container for one or more projects.
    • Project – a container for files; .sql files are grouped as Queries, all other files are stored as Misc.
    How do I start?
    Open Management Studio as normal, and then click File | New and select Project. This will bring up the New Project dialog box and you can select/add details as necessary in the places indicated. If this is the first project you are creating then be sure to select the Create directory for solution check box (4). If you know in advance that you are going to have more than one project in the solution then you may want to edit the Solution name (3), as by default it will take the name of the project that you enter at (2). This will lead you to the following folder structure (depending on the location that you chose in 3 above). In SSMS you need to turn on the Solution Explorer, either via the View menu or by pressing Ctrl + Alt + L. This will bring up a dockable window that will let you quickly access the files that you choose to include in the Solution.
    Can we get to work and write some code yet please?
    Yes, we can. As with many Microsoft products there are several ways to go about this; let's look at the easiest way when creating new code. When writing a presentation I usually start from the position we are currently in – a brand new solution and project with no code. Later on we will look at incorporating existing code files into the Project where we need it. Right-click on the Project name and choose Add New Query. As soon as you click this you will be prompted to select the SQL Server that you want to connect to, and once you have done that you will have your new query open in the text editor and the Solution Explorer will now look like this, showing your server connection and your new query. And the Project folder will look like this. Now once you have written your code don't press Save; choose Save As and give the code a better name than QueryX.sql. SSMS will interpret this as a request to rename Query1, and your Project and the Project folder will show that SQLQuery1.sql no longer exists but there is now a file named as you requested. If you happen to click Save in error then right-click the query in the project and choose Rename. You can then alter the name as you like, even when open in the SSMS text editor, and the file will be renamed. When creating a set of scripts for a presentation I name files with a numeric prefix so that when they are sorted by name they are in the order that I need to use them during the session.
    I love this idea but I've got loads of existing scripts I want to put in Projects
    Excellent, adding existing files to a project is easy. Let's consider that you have query files in your My Documents folder and you want to bring them into the Project we have just created.
    Right-click on the Project and choose Add | Existing Item. Navigate to the location of your chosen file and select it. The file will open in the SSMS text editor and the Project will be updated to show that the selected query is now part of your project. If you look in Windows Explorer you will see that the query file has been copied into the Project folder; the original file still remains in your My Documents (or wherever it existed). I'll leave it as an exercise for the reader to explore creating further Projects within a solution but will happily answer questions if you get into difficulties.
    What other advantages do I get from this?
    Well, as all your code is neatly in one Solution folder and the folder contains only files that are pertinent to the session you are presenting, it makes it very easy to share this code: simply copy the whole folder onto a USB stick, blog, FTP location, wherever you choose, and it's all there in one self-contained parcel. You don't have to limit yourself to .sql query files; you can add any sort of document via the Add Existing Item method, just try it out. Right-click on the project and choose Add | Existing Item, then change the file type filter. You can multi-select items here using Ctrl as you click each item you want. When you are done, click the Add button and the items will be brought into your project. Again, using this process means the files are copied into the project folder, leaving your original files untouched in their original location. Once they are here you can double-click them in the SSMS Solution Explorer to open them; for files with a specific file type the appropriate application will be launched – i.e. Word, Excel etc. However, if the files are something that the SSMS text editor can display then they will open in a tab in SSMS. Try it out with a text file or even a PS1 file …
    This sounds excellent but what do I need to watch out for?
    One big thing to consider when working like this is the version of SSMS that you are using. There is something fundamentally different between the different versions in the way that the project (.ssmssqlproj) and solution (.sqlsuo and .ssmssln) files are formatted. If you create a solution in an older version of SSMS and then open it in a newer version you will be given the option to upgrade it. Once you do this upgrade then the older version of SSMS will not be able to open the solution any more. Now this ranks as more of an annoyance than a disaster, as the files within the projects are not affected in any way; you would just have to delete the files mentioned and recreate the solution in the older version again.
    Summary
    So, here we have seen how using SSMS Projects and Solutions can help keep related code files (and other document types) together in a neat structure so that they can be quickly navigated during a presentation, and it also makes it incredibly simple to distribute your code and share it with others. I hope this is of use to you and helps you bring more order into your sql files; whether you are a person that does technical presentations or not, having your code grouped and managed can make for a lot of advantages as your code library expands.

    Read the article

  • Are 'edited by' inline comments the norm in shops which use revision control?

    - by Joshua Smith
    The senior dev in our shop insists that whenever code is modified, the programmer responsible should add an inline comment stating what he did. These comments usually look like // YYYY-MM-DD <User ID> Added this IF block per bug 1234. We use TFS for revision control, and it seems to me that comments of this sort are much more appropriate as check-in notes rather than inline noise. TFS even allows you to associate a check-in with one or more bugs. Some of our older, often-modified class files look like they have a comment-to-LOC ratio approaching 1:1. To my eyes, these comments make the code harder to read and add zero value. Is this a standard (or at least common) practice in other shops?

    Read the article

  • Content, MetaData and Taxonomy 1 Taxonomy Manager

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized Content store, together with the ability to apply Taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. But first let's take a look at how the pieces work together....

    Read the article

  • Is there a very simple XML Editor/Viewer?

    - by Paul Spangle
    Part of our support job is to look at XML documents sent to our systems by our clients that have failed to load correctly. I do this by pasting the file into Visual Studio as an XML file; VS colours it and adds line-breaks, indenting, etc., and it's then usually quite simple to spot the error, manually fix it and resubmit the document. Do I really have to use Visual Studio to do this? OK, it does the job quite well, but I'd have thought that something like Notepad++ would do the job just as well with a fraction of the machine resource usage. Is there a plugin for Notepad++ or another bit of freeware that can make XML look pretty and let me make simple edits?

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer. I...

    Read the article

  • Custom Theming now Available in Gmail

    - by Asian Angel
    This past November Google unveiled a new look for Gmail with HD themes, but you could not set up custom themes until now. Set up your new custom theme with a Light or Dark look to match up nicely with your chosen background and enjoy a more personalized experience in your inbox. This is where you will find the new custom settings on the Themes Settings Page… The confirmation screens for the new Light and Dark Custom Themes…

    Read the article

  • What's your advice on a potential legal suit? [closed]

    - by ohho
    I [xxx app developer] received an email from Apple that a developer [of yyy app] believes I am "infringing their copyright."
    Description of Issue: [xxx developer] copied my application (my application is [yyy]) feature by feature. Even their donation model is completely copied from my application. Their first release was significantly later than mine, which implies copying of the application rather than parallel development. I suffered significant financial losses because of their actions, in addition to promotion problems, as many people are confused with their product. My advertising was based around the idea of a "free [yyy] application for an iPhone" and they have just taken that as a title for their application. I would appreciate it if someone takes a look at their release schedule and compares it to my releases. Additionally, please take a look at their functionality and how it point by point copies the functionality of my older releases. I am asking Apple to remove their application from the App Store, and ban them from resubmitting it. Thank you for your time! [yyy developer], the developer of the [yyy] application.
    My response was: The code of [xxx] is written by myself, using Apple public API. The graphics elements are designed by myself. The user interface and app control are independently designed and different from other [similar type] apps (please judge yourself). In-app Purchase is iOS Apple standard API. iAd is Apple iOS standard API. I don't think features can be owned exclusively. In fact, my app comes with fewer features, as I prefer minimalist design. I don't think an idea can be owned exclusively.
    Apple responded: Thank you for your response. Unfortunately, Apple cannot serve as arbiter for disputes among third parties. Please contact [yyy developer] directly regarding your actions. You can reach [yyy developer] through: [...]. We look forward to confirmation from both parties that this issue has been resolved. If this issue is not resolved shortly, Apple may be forced to pull your application(s) from the App Store.
    Then I sent my response above to [yyy developer]. [yyy developer] then asked me "to provide (my) legal address and contact details that (his) lawyer requires to file a copyright infringement suit."
    IMO, I don't think the [yyy developer]'s claim of a "feature by feature" copy is valid. I have fewer features and a completely different user interface design. However, I don't think I can afford legal action for an app with so little financial return. So what's your advice on this? Should I just let Apple pull my app? Or is there any alternative I can consider? FYI ... UI of [xxx app]: and UI of [yyy app]:

    Read the article

  • Binding a select in a client template

    I recently got a question on one of my client template posts asking me how to bind a select tag's value to data in client templates. I was surprised not to find anything on the web addressing the problem, so I thought I'd write a short post about it. It really is very simple once you know where to look. You just need to bind the value property of the select tag, like this: <select sys:value="{binding color}"> If you do it from markup like here, you just need to use the sys: prefix....

    Read the article

  • Free SEO Analysis using IIS SEO Toolkit

    In my spare time I've been thinking about new ideas for the SEO Toolkit, and it occurred to me that rather than continuing to try to figure out more reports and better diagnostics against some random fake sites, it could be interesting to ask openly for anyone that is wanting a free SEO analysis report of your site, and test drive some of it against real sites. So what is in it for you? I will analyze your site to look for common SEO errors, I will create a digest of actions to do and other...

    Read the article
