Search Results

Search found 32429 results on 1298 pages for 'project layout'.


  • Organising asp.net website development process

    - by ZX12R
    Is there a standard practice for organizing the process of developing a simple website? There is no use implementing MVC, as there is no database involved. It would be very useful to organize the project by separating: the .aspx files and master page content (this can be very useful in implementing simple CMS techniques), user controls, scripts, styles, and images. Is there any industry standard or best practice for this? Thanks in advance :) Update: yes, the layout I have listed is convenient, but it would be great if I could separate server code and files (master, aspx, ...) from the actual page content. One more reason for not using MVC: I usually outsource the SEO process, and an MVC application can be Greek/Latin for my SEO expert. :)
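    There is no single industry standard, but a conventional folder layout along these lines is common (the folder names here are just one possibility, apart from the special App_Code and App_Data folders that ASP.NET reserves):

        /                 Default.aspx, web.config
        /App_Code         shared server-side code
        /App_Data
        /MasterPages      .master files
        /Controls         user controls (.ascx)
        /Scripts          JavaScript
        /Styles           CSS
        /Images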


  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year's conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let's travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft's Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, "If it's not MVC, it's crap". Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model; it just requires diligence and discipline.

    So over the past few years some horror stories have begun to bubble to the surface about software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
    These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as "the future". These ill-fated rewrites offered no benefit to end users or customers, and in fact resulted in less stable, less scalable, and more complicated systems; basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel "dirty". The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems simpler and more intuitive (i.e. PHP or Ruby).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts, both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines.

    The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
    And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC.

    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it's a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

    In response to these new industry trends, Microsoft did what it always does: it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band, with a dependency only on ASP.NET 4.0, it can be used immediately in a variety of production scenarios.

    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the "Future of ASP.NET". In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward; perhaps the moniker of "ASP.NET Web Stack", coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back, will eventually stick.

    Web API is being positioned as the unification of ASP.NET, the glue that is able to pull this fragmented mess back together again. The "One ASP.NET" strategy will promote the use of all frameworks (WebForms, MVC, and Web API), even within the same web project.
    Basically the message is: utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that "hybrid" applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn't, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API.

    And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a "hybrid" model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, lightweight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed.

    But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the "One ASP.NET" strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
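    For readers who have not yet seen the framework, the convention-based model described above looks roughly like the sketch below. This is written against the current Web API bits and uses a hypothetical IProductRepository; it is an illustration, not DotNetNuke code.

        using System.Collections.Generic;
        using System.Net;
        using System.Web.Http;

        public class ProductsController : ApiController
        {
            private readonly IProductRepository _repository; // hypothetical repository

            public ProductsController(IProductRepository repository)
            {
                _repository = repository;
            }

            // Convention: GET api/products maps to a method whose name starts with "Get".
            public IEnumerable<Product> GetAllProducts()
            {
                return _repository.GetAll();
            }

            // GET api/products/5
            public Product GetProduct(int id)
            {
                var product = _repository.Get(id);
                if (product == null)
                    throw new HttpResponseException(HttpStatusCode.NotFound);
                return product;
            }
        }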


  • 404 Error - HEAD request on default page

    - by Matt
    I am working on a project where we are about to go to internal release, so we are working to clean up the small problems before then. I was looking at our logs and noticed a high number of 404 errors. On further inspection, it seems that all of them are related to HEAD requests. I haven't been able to find any substantive information about the preferred method for handling this in a standards-compliant manner. Is there anything out there that points out the proper way to handle this?
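    For reference, RFC 2616 requires HEAD to behave exactly like GET except that no message body is returned, so the standards-compliant fix is to answer HEAD on the default page with the status and headers a GET would produce. A minimal sketch, assuming an ASP.NET site (the module name and the short-circuit approach are illustrative, not a canonical recipe):

        using System;
        using System.Web;

        public class HeadRequestModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                if (string.Equals(app.Request.HttpMethod, "HEAD",
                                  StringComparison.OrdinalIgnoreCase))
                {
                    // Acknowledge the HEAD request instead of letting it 404.
                    app.Response.StatusCode = 200;
                    app.CompleteRequest();
                }
            }

            public void Dispose() { }
        }

    The module would still need to be registered in web.config, and a fuller implementation would run the GET pipeline and suppress only the body so the headers match.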


  • How do I share a WiX fragment in two WiX projects?

    - by Randy Eppinger
    We have a WiX fragment in a file SomeDialog.wxs that prompts the user for some information. It's referenced by another fragment in the InstallerUI.wxs file that controls the dialog order. Of course, Product.wxs is our main file. Works great. Now I have a second Visual Studio 2008 WiX 3.0 project for the .MSI of another application, and it needs to ask the user for the same information. I can't seem to figure out the best way to share the file so that changing the information requested will result in both .MSIs getting the new behavior. I honestly can't tell if a merge module, a .wxi (include), or a .wixlib is the right solution. I would have hoped to find a simple example of someone doing this, but I have failed thus far. Edit: Based on Rob Mensching's wixlib blog entry, a wixlib may be the answer, but I am still searching for an example of how to do this.
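    If the wixlib turns out to be the answer, the command-line flow is roughly this (a sketch using the WiX 3.0 tools candle, lit, and light; the output names are arbitrary):

        candle SomeDialog.wxs InstallerUI.wxs
        lit -out SharedUi.wixlib SomeDialog.wixobj InstallerUI.wixobj

        candle Product.wxs
        light Product.wixobj SharedUi.wixlib -out Product.msi

    Both .MSI projects would link the same SharedUi.wixlib, so a change to the shared dialog propagates to each installer at its next build.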


  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to name the label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, exFAT and TFAT - transaction-safe FAT) and many features to let you scan and even defrag the volume - but not labeling. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all: it neither reports the label to the OS nor changes the label on the FAT volume.

    So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some online resources that are handy for parsing FAT:

    http://en.wikipedia.org/wiki/File_Allocation_Table
    http://www.pjrc.com/tech/8051/ide/fat32.html
    http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx

    You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h, or the above links, for the fields we discuss here. The first sector of a FAT volume (not necessarily the first sector of a disk) is the FAT boot sector and BPB (BIOS Parameter Block). At offset 43, bgbsVolumeLabel (or bsVolumeLabel on FAT16) stores the volume label, but note that the spec also indicates: "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created." So we can't just simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file, but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h).

    Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is: boot sector, followed by reserved sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (can be 1 or 2), after that the root directory (FAT12/16 only), and then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32 this rule no longer holds: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (offset 44 in the boot sector). This field is usually 0x00000002 (this is how CE initializes the root directory after formatting a volume; note we should never assume it is always true), which means the first data cluster. But unlike the contiguous FAT12/16 root directory, the FAT32 root directory is just like a regular file and can be fragmented, so we need to access it by hopping from one cluster to another, traversing the FAT table.

    Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. (Be aware that the public code only provides the formatter for FAT12/16/32; for exFAT it is still not available.) FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily.
    But I would like to discuss the following code pieces:

        dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;
        dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries;

    Note that dwReservedSectors is 32 on FAT32 and 1 on FAT12/16. The root entry count is the other difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size for FAT12/16 (usually 512, defined in DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:

        memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel));

    This sets the volume label to an empty (space-padded) string. Now let's carry on to the next section - writing the root directory:

        if (fo.dwFatVersion == 32) {
            if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {
                dwRootSectors = dwSectorsPerCluster;
            }
            else {
                DIRENTRY    dirEntry;
                DWORD       offset;
                int         iVolumeNo;

                memset(pbBlock, 0, pdi->di_bytes_per_sect);
                memset(&dirEntry, 0, sizeof(DIRENTRY));

                dirEntry.de_attr = ATTR_VOLUME_ID;  // the first one is the volume label
                memcpy(dirEntry.de_name, "TFAT       ", sizeof(dirEntry.de_name));
                memcpy(pbBlock, &dirEntry, sizeof(dirEntry));
                ...
                // Skip the next step of zeroing out clusters
                dwCurrentSec += dwSectorsPerCluster;
                dwRootSectors = 0;
            }
        }

        // Each new root directory sector needs to be zeroed.
        memset(pbBlock, 0, cbSizeBlk);
        iRootSec = 0;
        while (iRootSec < dwRootSectors) {

    Basically, the code zeroes out each entry in the root directory, depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors needed for the root entries (512 entries in most cases), and on FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, the code initializes the root directory with a special volume label entry for a special purpose. Despite TFAT's unusual initialization process, it does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter - and also remember to sync the volume label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.


  • dcommit to SVN in 1 commit after cherry-picking in git

    - by DJ
    I would like to know if there is a clean way to git-svn dcommit multiple local commits as one commit to Subversion. The situation I have is that I am cherry-picking some bug-fix changes from our trunk into the maintenance branch. The project preference is to have the bug fixes committed as one commit in Subversion, but I would like to keep the history of the changes I cherry-picked in my local git for reference. Currently what I do is all the cherry-picking on branch X and then a squash merge into a new branch Y. The dcommit is then done from branch Y. Is there a better way to do it without using an intermediary branch?
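    For what it's worth, one way to drop the intermediary branch is to do the squash merge directly on the branch that git-svn created for the maintenance branch (a sketch, with hypothetical branch names):

        git checkout maint                    # the git-svn tracking branch
        git merge --squash X                  # stage all cherry-picked changes as one
        git commit -m "Bug fixes from trunk"  # single local commit
        git svn dcommit                       # lands in Subversion as one revision

    Branch X keeps the individual cherry-pick history locally, while Subversion only ever sees the one squashed commit.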


  • Delphi 2010 Wide functions vs. String functions

    - by Mick
    We're currently converting a Delphi 2007 project to Delphi 2010. We were already using Unicode (via WideStrings and the TNT Unicode Controls). I was expecting to replace all the Wide functions (e.g. WideUpperCase) with their equivalents (e.g. UpperCase), but they do not work the same way. For example, WideUpperCase correctly uppercases "Campañas", but UpperCase leaves the ñ in lower case. Are there any other differences that I should be aware of? E.g. do WideFormat and Format work the same? Thanks


  • Handling redirected URL within Flex app?

    - by fortpointuiguy
    We have a Flex client and a server that is using the Spring/BlazeDS project. After the user logs in and is authenticated, the Spring security layer sends a redirect to a new URL, which is where our main application is located. However, within the Flex client I'm currently using HTTPService for the initial request, and I get the redirected page sent back to me in its entirety. How can I just get the URL, so that I can use navigateToURL to send the app where it needs to go? Any help would be greatly appreciated. Thanks!


  • GDI & Hardware Cursor

    - by Abhi
    Dear All, I am working on an i.MX51 project and I want to know what GDI and hardware cursor mean. The RTOS I am using is Windows CE 6.0 R3. We are looking to speed up GDI and to implement the hardware cursor, so I want to learn about both. I am also referring to WC600_MX51_SDK_0912_ReferenceManual.pdf & MCIMX51RM.pdf, and from these PDFs I learned that the hardware cursor is related to the Display Processor module. But I am still unclear: what exactly do "speeding up GDI" and "hardware cursor" mean? Please guide me on the correct steps to achieve my target.


  • How to create makefile for Lazarus projects?

    - by Gustavo Carreno
    After doing a light search on the Lazarus site, I've come to the conclusion that this question has been asked a few times, but I haven't found an answer, so I'll ask my SO peers. Is there a way to create a Makefile that replicates what the Lazarus IDE does when it compiles a project? If so, I really don't mind whether it's makefile.fpc or just a plain makefile; I just want some pointers on how to get there. BTW, I've tried the option to enable the Makefile in the Lazarus options. It doesn't work.
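    One commonly suggested route is to let the makefile delegate to lazbuild, the command-line builder that ships with Lazarus and compiles a .lpi the same way the IDE does. A minimal sketch (the project file name is hypothetical):

        PROJECT = myproject.lpi

        all:
        	lazbuild $(PROJECT)

        clean:
        	$(RM) -r lib *.o *.ppu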


  • Initial Review - Mastering Unreal Technology I: Introduction to Level Design with Unreal Engine 3

    - by Matt Christian
    Recently I purchased 3 large volumes on using the Unreal 3 Engine to create levels and custom games. This past weekend I cracked the spine of the first and started reading. Here are my early impressions (I'm ~250 pages into it; with appendices it's about 900).

    Pros

    Interestingly, the book starts with an overview of the Unreal engines leading up to Unreal 3 (including Gears of War) and follows with some discussion on planning a mod and what goes into the game development process. This is nice for an intro to the book and is much preferred to a simple chapter detailing what is on the included CD, how to install and set up UnrealEd, etc... While the chapter on Unreal history and planning can be considered 'fluff', it's much less 'fluffy' than most books provide.

    I need to mention one thing here that is pretty crucial to the way I'm going to continue reviewing this book. Most technical books like this are used as a shelf reference: a thick volume you use for looking up techniques every now and again. Even so, I prefer reading from cover to cover, including chapters I may already be knowledgeable about (I'm sure this is typical for most people). If there was a chapter on installing UnrealEd (the previously mentioned 'fluff'), I would probably force myself to read it, even though I've installed the game and engine multiple times on different systems.

    Chapter 3 is where we really get to the introduction piece of UnrealEd, creating your first basic level. This large chapter details creating two small rooms, adding static meshes, adding lighting, creating and adding particle emitters, creating a door that animates with Unreal Matinee and Kismet, static meshes with physics, and other little additions to make your level look less beginner. This really is a chapter that overviews the entire rest of the book, as each chapter following details the creation and intermediate usage of Static Meshes, Materials, Lighting, etc...

    One other very nice part of this book is the way the tutorials are set up. Each tutorial builds off the previous and all are step-by-step. If you haven't completed one yet, you can find all the starting files on the CD that comes with the book.

    Cons

    While the description of the overview chapter (Chapter 3) is fresh in your mind, let me start the cons by saying this chapter is set up extremely confusingly for the noob. At one point, you end up creating a door mesh and setting it up as an InterpActor so that it is ready to be animated, only to switch to particles and spend a good portion of time working on a different piece of the level. Yes, this is actually how I develop my levels (jumping back and forth), though it's very odd for a book to jump out of sequence.

    The next item might be a positive or a negative depending on your skill level with UnrealEd. Most of the introduction to the editor layout is found in one of the appendices instead of before Chapter 3. For new readers, this might lead to confusion, as Appendix A would typically be read between Chapters 2 and 3. However, this is a positive for those with some experience in UnrealEd, as they don't have to force themselves through a 'learn every editor button' chapter. I'm listing this in the Cons section as the book is 'Introduction to...' and is probably going to be directed toward a lot of very beginner developers.

    Finally, there's a lack of general description of the underlying engine and what each piece in UnrealEd is or does.
    Sometimes you'll be performing tutorial after tutorial with barely a paragraph in between describing any of what you've just done. Tutorial 1.1 Step 6 says to press Button X, so you do. But why? This is in part a problem with the structure of the tutorials rather than the content of the book. Since the tutorials are so focused on a step-by-step (or procedural) description of a process, you learn the process and not why you're doing it. For example, you might learn how to size a material to a surface, but you will only learn what buttons to press and not what each one does.

    This becomes extremely apparent in the chapter on Static Meshes, as most of the chapter is spent in 3D Studio Max. Since there are books on 3DSM and modelling, the book really only tells you the steps and says to go grab a book on modelling if you're really interested in 3DSM. Again, I've learned the process to develop my own meshes in 3DSM, but I don't know the why behind the steps.

    Conclusion

    So far the book is very good, though I would have a hard time recommending it to a complete beginner. I would suggest anyone looking at this book (obviously including the other 2, more advanced volumes) pick up a copy of UDK or Unreal 3 (available online or via download services such as Steam), watch some online tutorials, and play with it first. You'll find plenty of online videos available that were created by the authors and may serve as a better introduction to the editor.


  • SQL Server CE, Visual Studio 2008/2010 RC, and Linq-to-Sql

    - by blu
    I added an .sdf to my project, added a table, added a LINQ to SQL .dbml, and tried to add the table to the .dbml. The result was an error: "The selected object(s) are an unsupported data provider". This happens in both VS 2008 Professional SP1 and 2010 RC Ultimate. I found someone talking about using SqlMetal to generate the file, but I didn't enjoy that 2 years ago, and after a little playing around I recall why. Does anyone know if this is going to be supported in the release version? Should I abandon SQL Server CE and just use SQLite (with DbLinq)? Thanks for any insight.
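    For reference, the SqlMetal route alluded to above is roughly a one-liner; it can read an .sdf directly (the file names here are hypothetical):

        sqlmetal /dbml:Data.dbml Data.sdf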


  • Where is the PostSharp.Public 1.5 DLL?

    - by jfneis
    Fellows, I'm going crazy with what looks like a really stupid problem. I'm trying to build a simple example using PostSharp as a logging AOP utility. I've not installed PostSharp, and I don't want to; I want to reference the necessary DLLs, change my .csproj, and see everything working. Changing the project and adding references was kind of easy, but just after adding the LogAttribute to a method I got two errors:

    Error 1: 'Log4PostSharp.LogAttribute' is not an attribute class C:\Dev\LogWithPostsharp\LogWithPostsharpCmd\Program.cs 17 10 LogWithPostsharpCmd

    Error 2: The type 'PostSharp.Extensibility.MulticastAttribute' is defined in an assembly that is not referenced. You must add a reference to assembly 'PostSharp.Public, Version=1.5.0.0, Culture=neutral, PublicKeyToken=b13fd38b8f9c99d7'. C:\Dev\LogWithPostsharp\LogWithPostsharpCmd\Program.cs 18 22 LogWithPostsharpCmd

    The first error really looks like a consequence of the second, but here is the deal: PostSharp.Public.* simply doesn't exist in the downloaded .zip. Is there something that I'm not getting? Thank you in advance. Filipe


  • Repository pattern - Switch out the database and switch in XML files

    - by glasto red
    Hello, I have an ASP.NET MVC 2.0 project in which I have followed the Repository pattern. Periodically I lose access to the database server, so I want to have another mechanism in place (XML files) to continue developing. It is not possible to have a local version of the db, unfortunately! I thought this would be relatively easy using the Repository pattern: switch out the db repositories and switch in XML versions. However, I am having real trouble coming up with a solution. I have tried LINQ to XML, but then ran into problems trying to return a List of News items, as the LINQ to XML ToList returns a generic List of XElement. Should I be mapping the XElement list over to the News list by hand? It just seems a bit clunky compared to LINQ to SQL attributes on the News class and then simply doing a Table.....ToList(). Any direction would be appreciated. Thanks
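    Mapping by hand is the usual answer, and it stays reasonably tidy if you hide it behind the repository interface. A minimal sketch, assuming a News class with Id and Title properties and an INewsRepository interface from the existing repository layer:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public class XmlNewsRepository : INewsRepository
        {
            private readonly string _path;

            public XmlNewsRepository(string path)
            {
                _path = path;
            }

            public List<News> GetAll()
            {
                // Project each <News> element into a News entity by hand.
                return XElement.Load(_path)
                               .Descendants("News")
                               .Select(x => new News
                               {
                                   Id = (int)x.Attribute("Id"),
                                   Title = (string)x.Attribute("Title")
                               })
                               .ToList();
            }
        }

    The controllers keep talking to INewsRepository, so swapping between the LINQ to SQL and XML implementations becomes a one-line change at the composition root.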


  • Dialogs (Real ones)

    - by Richard W
    Having tried a number of different solutions, I keep coming back to this. I need a Window.ShowDialog, using the ViewModelLocator class as a factory via a UnityContainer. Basically I have a View (and ViewModel) which, on a button press on the view, needs to create a dialog (taking a couple of parameters in its constructor) that will process some logic and eventually return a result to the caller (along with the results of all the logic it computed). Maybe I'm wrong for still looking at this from a Windows Forms perspective, but I know exactly what I want to do and I want to ideally do it using WPF and MVVM. I'm trying to make this work for a project, and ultimately don't want to have to go back to vanilla WPF to make it work.
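    One workable pattern is a small dialog service that the ViewModelLocator registers with the UnityContainer. A sketch, where DialogWindow is a hypothetical window whose XAML maps view models to views via DataTemplates:

        public interface IDialogService
        {
            bool? ShowDialog(object viewModel);
        }

        public class DialogService : IDialogService
        {
            public bool? ShowDialog(object viewModel)
            {
                var window = new DialogWindow { DataContext = viewModel };
                return window.ShowDialog(); // blocks until the dialog closes
            }
        }

    The calling view model takes an IDialogService in its constructor (resolved by Unity), news up the child view model with its two constructor parameters, calls ShowDialog, and reads the computed results back off the child view model afterwards.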


  • PHP form class

    - by Oli
    I'm used to ASP.NET's and Django's methods of doing forms: nice object-oriented handlers, where you can specify regexes for validation and do everything in a very simple way. After months living happily without it, I've had to come back to PHP for a project and noticed that everything I used to do with PHP forms (manual output, manual validation, extreme pain) was utter rubbish. Is there a nice, simple and free class that does form generation and validation the way it should be done? Clonefish has the right idea, but it's way off on the price tag.


  • Strange jboss console error

    - by c0mrade
    Hello everyone, I'm adding a module to an existing multi-module Maven project. For this one I want everything to be like the other modules (meaning dependencies), just to test a hello world; then I'll go do some more complex stuff. It does print hello world as it should when deployed to the JBoss server, but I get a strange error on the console. Has anyone had a similar experience, and how can I fix it? Here it is:

    15:48:35,789 ERROR [STDERR] log4j:ERROR A "org.jboss.logging.appender.FileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
    15:48:35,789 ERROR [STDERR] log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
    15:48:35,790 ERROR [STDERR] log4j:ERROR [BaseClassLoader@9a8d9b{vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/new-module-0.0.1-SNAPSHOT.war/}] whereas object of type
    15:48:35,790 ERROR [STDERR] log4j:ERROR "org.jboss.logging.appender.FileAppender" was loaded by [org.jboss.bootstrap.NoAnnotationURLClassLoader@506411].


  • create a wav file from multiple wav files in delphi

    - by Bayu
    I have a problem with my final project: I'm having trouble saving multiple WAV files into one WAV file. Let's take an example. I have 3 WAV files which are the syllables of the word "hospital": "hos.wav", "pi.wav", and "tal.wav" (sorry if I'm wrong in determining the syllables of the word). Each of those syllable WAV files contains an utterance of the corresponding syllable of the word "hospital". My task is to merge those files so that the word "hospital" can be reproduced from them, and then to save the merged file as a new WAV file, let's say "hospital.wav". I've done my first task, but not my second... Can anyone help me? Thanks in advance.
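    The question is about Delphi, but the format logic is language-neutral: for canonical PCM files with identical formats, you keep the first file's 44-byte header, append every file's data bytes, then patch the two size fields. A sketch of that logic in C#, for illustration only (it assumes canonical 44-byte headers and matching sample formats):

        using System;
        using System.IO;
        using System.Linq;

        static class WavMerger
        {
            public static void Merge(string[] inputs, string output)
            {
                const int HeaderSize = 44;
                byte[] header = null;

                using (var data = new MemoryStream())
                {
                    foreach (var file in inputs)
                    {
                        var bytes = File.ReadAllBytes(file);
                        if (header == null)
                            header = bytes.Take(HeaderSize).ToArray(); // keep first header
                        data.Write(bytes, HeaderSize, bytes.Length - HeaderSize);
                    }

                    var samples = data.ToArray();
                    // RIFF chunk size lives at offset 4, data chunk size at offset 40.
                    BitConverter.GetBytes(HeaderSize - 8 + samples.Length).CopyTo(header, 4);
                    BitConverter.GetBytes(samples.Length).CopyTo(header, 40);

                    using (var fs = File.Create(output))
                    {
                        fs.Write(header, 0, header.Length);
                        fs.Write(samples, 0, samples.Length);
                    }
                }
            }
        }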


  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object.

    The XML Data

    An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price.

        <Products>
          <Product ProductId="1"
                   ProductName="Haystack Code Generator for .NET"
                   IntroductionDate="07/01/2010" Price="799" />
          <Product ProductId="2"
                   ProductName="ASP.Net Jumpstart Samples"
                   IntroductionDate="05/24/2005" Price="0" />
          ...
          ...
        </Products>

    The Product Class

    Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class.

        public class Product : CommonBase
        {
          public const string XmlFile = @"Xml/Product.xml";

          private string _ProductName;
          private int _ProductId;
          private DateTime _IntroductionDate;
          private decimal _Price;

          public string ProductName
          {
            get { return _ProductName; }
            set {
              if (_ProductName != value) {
                _ProductName = value;
                RaisePropertyChanged("ProductName");
              }
            }
          }

          public int ProductId
          {
            get { return _ProductId; }
            set {
              if (_ProductId != value) {
                _ProductId = value;
                RaisePropertyChanged("ProductId");
              }
            }
          }

          public DateTime IntroductionDate
          {
            get { return _IntroductionDate; }
            set {
              if (_IntroductionDate != value) {
                _IntroductionDate = value;
                RaisePropertyChanged("IntroductionDate");
              }
            }
          }

          public decimal Price
          {
            get { return _Price; }
            set {
              if (_Price != value) {
                _Price = value;
                RaisePropertyChanged("Price");
              }
            }
          }
        }

    NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post.

    Reading Data

    When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the "Product" descendants in the XML file. The "select" portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the "Value" property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class.
        private void LoadProducts()
        {
          XElement xElem = null;

          try
          {
            xElem = XElement.Load(Product.XmlFile);

            // The following will NOT work if you have missing attributes
            var products =
                from elem in xElem.Descendants("Product")
                orderby elem.Attribute("ProductName").Value
                select new Product
                {
                  ProductId = Convert.ToInt32(
                    elem.Attribute("ProductId").Value),
                  ProductName = Convert.ToString(
                    elem.Attribute("ProductName").Value),
                  IntroductionDate = Convert.ToDateTime(
                    elem.Attribute("IntroductionDate").Value),
                  Price = Convert.ToDecimal(elem.Attribute("Price").Value)
                };

            lstData.DataContext = products;
          }
          catch (Exception ex)
          {
            MessageBox.Show(ex.Message);
          }
        }

    This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate attributes is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy.

    Using Extension Methods

    Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file.

        private void LoadProducts()
        {
          var xElem = XElement.Load(Product.XmlFile);

          var products =
              from elem in xElem.Descendants("Product")
              orderby elem.Attribute("ProductName").Value
              select new Product
              {
                ProductId = elem.Attribute("ProductId").GetAsInteger(),
                ProductName = elem.Attribute("ProductName").GetAsString(),
                IntroductionDate =
                    elem.Attribute("IntroductionDate").GetAsDateTime(),
                Price = elem.Attribute("Price").GetAsDecimal()
              };

          lstData.DataContext = products;
        }

    Writing Extension Methods

    To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write, except when you pass in the parameter you prefix the type with the keyword "this". This lets the compiler know that it should add this method to the class specified in the parameter.

        public static class XmlExtensionMethods
        {
          public static string GetAsString(this XAttribute attr)
          {
            string ret = string.Empty;

            if (attr != null && !string.IsNullOrEmpty(attr.Value))
            {
              ret = attr.Value;
            }

            return ret;
          }

          public static int GetAsInteger(this XAttribute attr)
          {
            int ret = 0;
            int value = 0;

            if (attr != null && !string.IsNullOrEmpty(attr.Value))
            {
              if (int.TryParse(attr.Value, out value))
                ret = value;
            }

            return ret;
          }

          ...
          ...
        }

    Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned, such as an empty string or a 0 for a numeric value.

    Summary

    Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data.
    You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish; a sketch of one such variant follows this post.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose "Tips & Tricks", then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down.

    Good Luck with your Coding,
    Paul D. Sheriff
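    Following the article's closing suggestion, a default-value variant of one of the extension methods might look like this (a sketch, not part of the article's download):

        public static int GetAsInteger(this XAttribute attr, int defaultValue)
        {
            int value;

            if (attr != null && int.TryParse(attr.Value, out value))
                return value; // valid numeric attribute

            return defaultValue; // missing attribute or unparsable data
        }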


  • How to automatically add SVN commit messages and revision numbering to java file?

    - by John
    I'm working on an Apache Wicket project in Eclipse with Maven 2 -- my SCM is Subversion. I've got Subclipse set up, which I use to commit changes to the repository. I've seen several projects with nice headers containing the current revision number, and at the bottom of the Java source file there's a list of all the changes that have been committed to the file, including the comments that were passed. Is there any way of achieving this sort of behaviour automatically? At work I'm using MKS, which does this automatically, but I have yet to figure out how to achieve this with SVN and Eclipse.
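    For the header half of this, Subversion's keyword expansion can maintain the revision number; note that core Subversion deliberately has no $Log$-style keyword, so the per-file change list at the bottom is not something SVN will generate for you. A sketch (the file path is hypothetical):

        # enable keyword expansion on the file
        svn propset svn:keywords "Revision Date Author" src/main/java/Foo.java

        # placeholders in the file header comment, expanded by SVN on each commit:
        #   $Revision$   $Date$   $Author$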


  • Restore VisualSVN server from client copy.

    - by Kevin
    I am running VisualSVN Server on a Windows VM box. The VM crashed and corrupted the image. After restoring an older image (2007) we discovered that our data backup was not functioning properly. Hence I have a bunch of projects (~20) sitting on my laptop (client side) and I want to push them back into the VisualSVN Server, which is now empty. I know this can be done by simply adding the project files manually, but this is going to take a long time because I don't want to include every file (i.e. compiled files). Any suggestions would be greatly appreciated.


  • Excel error HRESULT: 0x800A03EC while trying to get range with cell's name

    - by Teerasej
    I am working on a Windows Service project that has to write data to a sheet in an Excel file repeatedly. But sometimes, just sometimes, the service throws the exception "Exception from HRESULT: 0x800A03EC" while it is trying to get a range by the cell's name. I have put the code for opening the Excel sheet and getting the cell here.

    OS: Windows Server 2003
    Office: Microsoft Office 2003 SP2

    1: Opening the Excel sheet

        m_WorkBook = m_WorkBooks.Open(this.FilePath, 0, false, 5, "", "", true, Excels.XlPlatform.xlWindows, ";", true, false, 0, true, 0, 0);

    2: Getting the cell to write to

        protected object m_MissingValue = System.Reflection.Missing.Value;
        Range range = m_WorkSheet.get_Range(cell.CellName, m_MissingValue); // the error comes from this method; the cell name is a string


  • How can I detect touch events on a UIImageView inside a deep hierarchy of UIViews?

    - by jester
    I am developing a simple game on iPhone. I started with a View-based application, and through the UIViewController I am loading a custom UIView that contains some more UIViews and some UIImageViews. I want to detect touch events on some of the UIImageViews, but I haven't been able to so far. I've created a second project for testing purposes and found that UIImageViews handle touch events when the hierarchy is like: UIViewController - UIView - UIImageView, but when it's like: UIViewController - UIView - UIView - UIImageView, they are not detected.

    Notes:
    - userInteractionEnabled is YES in all cases
    - all UIViews and UIImageViews I mention above are custom subclasses

    Can you think of any reason why a custom UIImageView can't receive touch events when it gets deeper in the view hierarchy?


  • war Ant task with needxmlfile="false" still complains

    - by Menelaos Perdikeas
    I have the following war task in my build.xml, and even though needxmlfile is set to false, Ant (version 1.8.2) complains when the web.xml file does not exist ("BUILD FAILED ... Deployment descriptor: /home/.../web/WEB-INF/web.xml does not exist"). What am I missing?

        <target name="war" depends="build">
          <mkdir dir="${build.dir}"/>
          <war needxmlfile="false"
               basedir="${webroot.dir}"
               warfile="${build.dir}/${project.distname}.war"
               webxml="${webinf.dir}/web.xml">
            <exclude name="WEB-INF/${build.dir}/**"/>
            <exclude name="WEB-INF/src/**"/>
            <exclude name="WEB-INF/web.xml"/>
          </war>
        </target>


  • Silverlight 5 – What's New? (Including Screenshots & Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011), and this blog post will tell you what you need to know before the beta ships.

    First, let me address people saying that it is dead after PDC 2010. I believe it's best to watch what the market is doing, not the vendor. During the Silverlight Firestarter, Microsoft showed a list of companies that are developing Silverlight 4 applications. Some of the companies have shipped and some haven't. It's just great to see the actual company names that are working on Silverlight instead of "people are developing for Silverlight".

    The next thing I wanted to point out is that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Guthrie's keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology. Even I, a Silverlight developer, am reading Pro HTML5.

    Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered, and that about 70% of these votes have been implemented in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8.

    Let's get started... what's new in Silverlight 5? I am going to show you some great applications and actual code shown during the Firestarter event.

    Media
    - Hardware video decode: instead of using the CPU to decode, the work is offloaded to the GPU. This will allow netbooks, etc., to play videos.
    - Trickplay: variable speed playback with pitch correction (if you speed up someone talking, they won't sound like a chipmunk).
    - Power management: less battery drain when playing video. Screensavers will no longer kick in while a video is playing; if you pause a video, the screensaver will kick in.
    - Remote control support: allows users to control playback functions like Pause, Rewind and Fast-forward.
    - IIS Media Services 4 has shipped and now supports Azure.

    Data Binding
    - Layout transitions: with just a few lines of XAML you can create a really rich experience without using storyboards or animations.
    - RelativeSource FindAncestor: Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control.
    - Custom markup extensions: markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support.
    - Changing styles at runtime by binding in style setters: changing styles at runtime used to be a real pain in Silverlight 4; now it's much easier. Binding in style setters allows bindings to reference other properties.
    - XAML debugging: in the demo, a breakpoint set in XAML showed exactly what was going on with the binding.

    WCF & RIA Services
    - WS-Trust support. Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange.
    - You can reduce network latency by using a background thread for networking.
    - Supports Azure now.

    Text and Printing
    - Improved text clarity that enables better text rendering.
    - Multi-column text flow, character tracking and leading support, and full OpenType font support.
    - A new PostScript vector printing API that provides control over what you print.
    - Pivot functionality baked into the Silverlight 5 SDK.
    Graphics
    - Immediate-mode graphics support that enables you to use the GPU and 3D graphics. Two demos stood out: 1) a 3D view of the Earth (not really a real-world application, though), and 2) a doctor's portal, which really showed what can be done with the 3D / GPU support.

    Out of Browser
    - OOB applications can now create and manage child windows, as shown in the demo where the "notes" were torn off the application and appeared in a new window.
    - Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries. In one demo, a USB device was connected, which fired off a local Win32 application that provided the data from the USB stick to Silverlight. Another demo showed a Silverlight 5 application exporting data right into Excel while running inside the browser.
    - Enterprise group policy support allows enterprises to lock down or open up the sandbox capabilities of Silverlight 5 applications.

    Testing
    - They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated tests without writing any code manually.

    Performance
    - Microsoft has worked to improve Silverlight startup time.
    - Silverlight 5 provides 64-bit browser support.
    - Silverlight 5 also provides IE9 hardware acceleration.

    I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading, and I hope you visit again soon.

