Search Results

Search found 8467 results on 339 pages for 'map layers'.


  • Review: A Quick Look at Reflector

    - by James Michael Hare
    I, like many, was disappointed when I heard that Reflector 7 was not free, and perhaps that's why I waited so long to try it and just kept using my version 6 (which continues to be free). But though I resisted for so long, I longed for the better features that were being developed, and began to wonder if I should upgrade. Thus, I began to look into the features offered in Reflector 7.5 to see what was new.

    Multiple Editions: Reflector 7.5 comes in three flavors, each building on the features of the previous one:
    - Standard – contains just the standalone application ($70)
    - VS – same as Standard, but adds the Reflector Object Browser for Visual Studio ($130)
    - VSPro – same as VS, but adds the ability to set breakpoints and step into decompiled code ($190)
    So let's examine each of these features.

    The Standalone Application (Standard, VS, VSPro editions): Popping open Reflector 7.5 and looking at the GUI, we see many of the familiar features, with a few new ones as well. Most notably, the disassembler window is now tabbed, with navigation buttons. This makes it much easier to back out of a deep dive through many layers of decompiled code to a previous point. There is also now an analyzer which can be used to determine dependencies for a given method, property, type, etc. For example, if we select System.Net.Sockets.TcpClient and hit the Analyze button, we see a window with expandable nodes showing what a given type uses, what uses it, who exposes it, and who instantiates it. Obviously, for low-level types (like DateTime) this list would be enormous, but it can give a lot of information on how a given type is connected to the larger code ecosystem. One of the other things I like about Reflector 7.5 is that it does a much better job of displaying iterator blocks than Reflector 6 did. For example, if you take a look at the Enumerable.Cast() extension method in System.Linq and dive into the CastIterator, Reflector 6 shows the compiler-generated plumbing, while Reflector 7.5 shows the iterator logic much more clearly. This is a big improvement in the quality of the decompiler, and for me it was one of the main reasons I decided to take the plunge and get version 7.5.

    The Reflector Object Browser (VS, VSPro editions): If you have the .NET Reflector VS or VSPro edition, Visual Studio gains a Reflector Object Browser window where you can select and decompile any assembly right inside Visual Studio. For example, if you want to take a peek at how System.Collections.Generic.List<T> works, you can either select List<T> in the Reflector Object Browser or, even simpler, select a usage of it in your code and CTRL + Click to dive in, and it takes you right to a source window with the decompiled source.

    Setting Breakpoints and Stepping Into Decompiled Code (VSPro): If you have the VSPro edition, in addition to all of the above you also get the ability to set breakpoints in decompiled code and step through it as if it were your own. This can be a handy feature when you need to see why your code's use of a BCL or other third-party library isn't working as you expect.

    Summary: Yes, Reflector is no longer free, and yes, that's a bit of a bummer. But it always was, and still is, a very fine tool. If you still have Reflector 6, you aren't forced to upgrade any longer, but the nicer disassembler (especially for iterator blocks) and the handy VS integration are worth at least considering an upgrade for. So I leave it up to you: these are some of the features of Reflector 7.5. What are your thoughts?

    Technorati Tags: .NET, Reflector

    Read the article

  • Oracle WebCenter: Common User Experience Architecture

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. In previous weeks we've provided an overview of Oracle WebCenter and discussed some of the other key goals; this week, we'll focus on how the new release of Oracle WebCenter delivers a Common User Experience Architecture.

    When Oracle talks about a Common User Experience Architecture, it really focuses on a core set of areas. First, the way that information is accessed needs to be consistent and extensible so that as requirements change, the applications don't need to be rewritten for every change. Second, this information access layer needs to be securely accessible to any application, site, or other channel that needs to leverage this information. Third, there needs to be a consistent presentation layout (Oracle calls it a UI shell) so that all resources can fit together in a usable, productive way. Fourth, there needs to be a common set of design patterns for how different menus, features, and services fit into this UI shell for broad and productive usability. Fifth, there needs to be a set of design patterns for the individual services that plug into this UI shell so that end users can move from one module of the application to another without new learning. Finally, all of these layers need to be customizable in an easy way that insulates IT from patching and upgrading problems and allows the business owners the agility to quickly change with market conditions.

    As Oracle has already announced, we will release our next generation of enterprise applications, called Oracle Fusion Applications. We have thousands of developers building these applications, all with different programming tool experience and UI design experience. We've educated over 6,000 developers building Oracle Fusion Applications to leverage these Common User Experience Architecture patterns to speed their learning curve of the new Java standards as well as SOA principles, delivering a revolutionary new set of applications. You can imagine the challenge of getting all these developers, with different backgrounds and different UI design skills, to deliver a completely integrated application user experience. This is why Oracle invested heavily in designing this Common User Experience Architecture, based on Oracle WebCenter and the Oracle Application Development Framework (ADF). It pulls together the best practices and design patterns that Oracle development required in order to bring Fusion Applications to market, and Oracle WebCenter is the user experience layer through which all of this is surfaced. In this way, customers can quickly brand a deployment for new partnerships without having to redevelop a new site, or quickly add new options to the UI shell to enable their line-of-business managers to adapt to a new competitive product. And with the core integration of activities to produce a Business Activity Stream, customers are able to stay on top of their key business actions as they happen; more importantly, the system can recommend actions or resources to help act on those activities.

    We've authored this whole set of design patterns for Oracle development to take advantage of in delivering Fusion Applications. We're also applying these design patterns to our existing E-Business Suite, PeopleSoft, Siebel, and JD Edwards applications so that they can tie in the exact same way that Fusion Applications has been brought together. This will provide customers with a complete Common User Experience Architecture for their entire ecosystem of applications within their enterprise, whether they are from Oracle, another vendor, or custom built. And this is all provided in the new release of Oracle WebCenter. These design patterns cover elements such as delivering a complete, aggregated menu of all the capabilities that a user's role allows, independent of which application they are trying to access. It means that as users move from one application to another, they will have a consistent user experience. And if they are using an Oracle application, any customizations made to the application are preserved and managed through upgrades and patches.

    Be sure to check back this week as we share more information and resources on Oracle's Common User Experience Architecture.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of relational databases and NoSQL databases in the Big Data story. In this article we will understand the role of key-value pair databases and document databases in supporting the Big Data story. Here are the operational databases covered in this mini-series:
    - Relational Databases (yesterday's post)
    - NoSQL Databases (yesterday's post)
    - Key-Value Pair Databases (this post)
    - Document Databases (this post)
    - Columnar Databases (tomorrow's post)
    - Graph Databases (tomorrow's post)
    - Spatial Databases (tomorrow's post)

    Key-Value Pair Databases: Key-value pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. KVP databases are a very simple implementation of NoSQL database concepts: they have no schema, which makes them very flexible as well as scalable. The disadvantages of KVP databases are that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties, and they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a key-value database looks like (a C# sketch of this model follows at the end of this entry):

        Key     | Value
        --------+------------
        Name    | Pinal Dave
        Color   | Blue
        Twitter | @pinaldave
        Name    | Nupur Dave
        Movie   | The Hero

    As the number of users grows, a key-value pair database starts getting difficult to manage. As there is no specific schema or rules associated with the database, there is also a chance the database grows exponentially. It is very crucial to select the right key-value pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of it.

    Riak: Riak is one of the most popular key-value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts. Key-value databases are a good choice for social media, communities, and caching layers for connecting to other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are a good option to consider.

    Document Databases: There are two different kinds of document databases: 1) those storing full document content (web pages, Word documents, etc.) and 2) those storing document components. It is the second kind that we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON (BSON) for the structure of the documents. JSON is very easy to understand and very easy to write for applications. There are two major JSON structures used by document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB: MongoDB databases are called collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high-volume content management systems.

    CouchDB: CouchDB databases are composed of documents which consist of fields and attachments. It supports ACID properties. One of the main attractions of CouchDB is that it will continue to operate even when network connectivity is sketchy; because of this, CouchDB favors local data storage.

    A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.

    Tomorrow: In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
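    To make the key-value model concrete, here is the minimal C# sketch promised above (the KvpDemo class is made up for illustration; an in-memory Dictionary stands in for a real KVP store such as Riak, which adds distribution, replication, and persistence on top of the same logical model):

        using System;
        using System.Collections.Generic;

        class KvpDemo
        {
            static void Main()
            {
                // Every record is just (key, value) stored as strings; there is no schema.
                var store = new Dictionary<string, string>
                {
                    { "Name",    "Pinal Dave" },
                    { "Color",   "Blue"       },
                    { "Twitter", "@pinaldave" }
                };

                // Reads and writes are always by key; there is no query language.
                // (Real stores like Riak scope keys by bucket, which is how two
                // different "Name" keys from the example table could coexist.)
                store["Movie"] = "The Hero";
                Console.WriteLine(store["Twitter"]);   // prints @pinaldave
            }
        }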

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday's blog post we understood how the Big Data evolution happened. Today we will understand the basics of Big Data architecture.

    Big Data Cycle: Just like every other database-related application, a Big Data project has its own development cycle, and the three Vs (volume, velocity, variety) certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project goes through similar phases of capturing, transforming, integrating, and analyzing data, and building actionable reporting on top of it. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture.

    Questions to Ask:
    - How big is your total database?
    - What is your reporting requirement in terms of time: real time, semi real time, or at frequent intervals?
    - How important is data availability, and what is the plan for disaster recovery?
    - What are the plans for network and physical security of the data?
    - What platform will be the driving force behind the data, and what are the different service level agreements for the infrastructure?

    These are just basic questions; based on your application and business need you should come up with your own custom list of questions to ask. As I mentioned earlier, these questions may look quite simple, but the answers will not be. When we are talking about a Big Data implementation, there are many other important aspects which we have to consider when we decide on the architecture.

    Building Blocks of Big Data Architecture: It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of Big Data architecture. [The original post includes a diagram giving a good overview of how the various components of a Big Data architecture are associated with each other.] In Big Data, many different data sources are part of the architecture, hence the extract, transform, and integrate layers are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. Per the business need, the data is processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of Big Data architecture: hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented.

    NoSQL in Data Management: NoSQL is a very famous buzzword, and it really means "Not Relational SQL" or "Not Only SQL". This is because in Big Data architecture the data can be in any format: unstructured, relational, or any other format from any other data source. To bring all the data together, relational technology is not enough; hence new tools, architectures, and algorithms were invented which take care of all kinds of data. These are collectively called NoSQL.

    Tomorrow: Over the next four days we will unpack the buzzword Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Low level programming - what's in it for me?

    - by back2dos
    For years I have considered digging into what I consider "low level" languages. For me this means C and assembly. However, I have had no time for this yet, nor has it EVER been necessary. Now, because I don't see any necessity arising, I feel like I should either schedule some point in time when I will study the subject or drop the plan forever.

    My Position: For the past 4 years I have focused on "web technologies", which may change, and I am an application developer, which is unlikely to change. In application development, I think usability is the most important thing. You write applications to be "consumed" by users; the more usable those applications are, the more value you have produced. In order to achieve good usability, I believe the following things are vital:
    - Good design: well-thought-out features accessible through a well-thought-out user interface.
    - Correctness: the best design isn't worth anything if it is not implemented correctly.
    - Flexibility: an application A should constantly evolve, so that its users need not switch to a different application B that has new features that A could implement. Applications addressing the same problem should not differ in features but in philosophy.
    - Performance: performance contributes to a good user experience. An application is ideally always responsive and performs its tasks reasonably fast (based on their frequency). The value of performance optimization beyond the point where it is noticeable by the user is questionable.
    I think low level programming is not going to help me with any of that, except for performance. But writing a whole app in a low level language for the sake of performance is premature optimization to me.

    My Question: What could low level programming teach me that other languages wouldn't? Am I missing something, or is it just a skill that is of very little use for application development? Please understand that I am not questioning the value of C and assembly. It's just that in my everyday life, I am quite happy that all the intricacies of that world are abstracted away and managed for me (mostly by layers written in C/C++ and assembly themselves). I just don't see any concepts that could be new to me, only details I would have to stuff my head with. So what's in it for me?

    My Conclusion: Thanks to everyone for their answers. I must say nobody really surprised me, but at least now I am quite sure I will drop this area of interest until any need for it arises. To my understanding, writing assembly these days for the processors used in today's CPUs is not only unnecessarily complicated, but risks resulting in poorer runtime performance than a C counterpart. Optimizing by hand is nearly impossible due to out-of-order execution, while you do not get all the kinds of optimizations a compiler can do automatically. Also, the code is either portable, because it uses a small subset of available instructions, or it is optimized, but then it probably works on one architecture only. Writing C is not nearly as necessary anymore as it was in the past. If I were to write an application in C, I would just as readily use tested and established libraries and frameworks that would spare me implementing string copy routines, sorting algorithms, and the other kinds of things that serve as exercises at university. My own code would execute faster at the cost of type safety. I am neither keen on reinventing the wheel in the course of normal app development, nor on trying to debug by looking at core dumps :D I am currently experimenting with languages and interpreters, so if there is anything I would like to publish, I suppose I'd port a working concept to C, although C++ might just as well do the trick. Again, thanks to everyone for your answers and your insight.

    Read the article

  • Silverlight Cream for February 06, 2011 -- #1042

    - by Dave Campbell
    In this Issue: Mike Taulty, Timmy Kokke, Laurent Bugnion, Arik Poznanski, Deyan Ginev, Deborah Kurata (two posts), Johnny Tordgeman, Roy Dallal, Jaime Rodriguez, Samuel Jack (two posts), James Ashley.

    Above the Fold:
    - Silverlight: "Customizing Silverlight properties for Visual Designers" (Timmy Kokke)
    - WP7: "Back button press when using webbrowser control in WP7" (Jaime Rodriguez)
    - Expression Blend: "Blend Bits 21–Importing from Photoshop & Illustrator…" (Mike Taulty)

    From SilverlightCream.com:
    - Blend Bits 21–Importing from Photoshop & Illustrator…: Mike Taulty is up to 21 episodes in his Blend Bits series now, and this one is about using Blend's import capability, such as importing a .psd file with all the layers intact.
    - Customizing Silverlight properties for Visual Designers: Timmy Kokke has part 1 of 2 on exposing your Silverlight control properties in design surfaces such as the Visual Studio designer or Expression Blend.
    - An error when installing MVVM Light templates for VS10 Express: Laurent Bugnion has released a new version of MVVM Light that resolves a problem with the VS2010 Express version of the templates... no problem with anything else.
    - Reading RSS items on Windows Phone 7: Arik Poznanski has a post up about reading RSS on WP7, but better yet, he also has code for a helper class that you can grab, plus an explanation of wiring it up.
    - Integrating your Windows Phone unit tests with MSBuild #4: The WP7 Unit Test Application: Deyan Ginev has a post up about Telerik's WP7 test app that outputs test results in XML from the emulator so they can be integrated with the MSBuild log.
    - Accessing Data in a Silverlight Application: EF: I apparently missed this post by Deborah Kurata last week on bringing data into your Silverlight app via Entity Framework... a good, detailed tutorial in VB and C#.
    - Updating Data in a Silverlight Application: EF: In Deborah Kurata's latest post, she continues with Entity Framework by demonstrating updates to the database... full source code will be provided in a later post.
    - Fun with Silverlight and SharePoint 2010 Ribbon Control - Part 2 - An In Depth Look At The Ribbon Control: Johnny Tordgeman has Part 2 of his Silverlight and SharePoint 2010 Ribbon series up... taking a deep dive into the ribbon... great explanation of the attributes, code included.
    - Geographic Coordinates Systems: Roy Dallal has some geo code up that's not necessarily Silverlight, but very cool if you're doing any GIS programming... ya gotta know the coordinate systems!
    - Back button press when using webbrowser control in WP7: Jaime Rodriguez has a post up discussing the much-lamented back-button behavior in the certification requirements and how to deal with it in a web browser app.
    - Multiplayer-enabling my Windows Phone 7 game: Day 1: Samuel Jack challenged himself to build a WP7 game in 3 days... now he's challenging himself to make it multiplayer in 3 days... this first hour-by-hour post covers research into networking and an Azure server-side solution.
    - Multiplayer-enabling my Windows Phone 7 game: Day 2–Building a UI with XPF: Day 2 of Samuel Jack getting the multiplayer portion of his game working in 3 days... this day involves getting up to speed with XPF.
    - How to Hotwire your WP7 Phone Battery: Did you realize that if you run your WP7 battery completely down, you can't charge it? James Ashley reports that circumstance, and how he resolved it.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    - (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    - (Both): The difficulty of basic string manipulation.
    - (Both): Lack of, or awkwardness in, support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    - (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
    - (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    - (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    - (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you would roll your own linked list in either of these languages.
    - (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    - (MATLAB): Integers apparently don't exist as a first-class type.
    - (R): The basic built-in data structures seem way too high level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    - (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    - (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could easily be replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue in some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. Having to use MATLAB/R for some of my work and a completely different language, with a completely different address space and way of thinking, for the rest is a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction themselves.

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfields, service centers, inventory sites, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project sites, areas, wells, basins, pipelines, critical infrastructure, offshore platforms, compressor stations, gas stations, etc. Any site can be classified into multiple hierarchies: organizational, operational, geographic, divisional, and so on. Any site can also be associated with multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, such as routes, pipelines, and more. The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups, like well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups, like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents, and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated with a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file, such as CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos, and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub.

    Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities, coupled with Google Maps integration, to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, for example cadastral, ownership, jurisdictional, geological, or seismic data, and any site-centric area-specific information, for example economic, political, risk, weather, logistic, or traffic information.

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context.

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code. It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me. Here's the scenario...

    In order to improve the user interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like "Processing..." or "Reloading..." while they are cycling through a postback. (Yes, I understand this could be improved by using AJAX/callbacks and so on, but even then you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol' postbacks.) It's a very simple trick, really. All I want to do is, when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message. This lets the user know, "Hey, we're doing something; don't click another link or scroll and try to take action right now."

    The first places I hooked this up were easy. The most common cause of a postback: buttons. And when you're writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property, which is designed for just this purpose: to run client-side JavaScript before the postback occurs. This is distinguished from the OnClick property, which tells the control what server-side code to run. Great! Done! Easy!

    But then there are other controls, like DropDownLists and CheckBoxes, that we use on our pages with the AutoPostBack=True setting, which causes postbacks. And these don't have OnClientClick or OnClientSelectedIndexChanged events. So I started getting creative, using an ASP:CustomValidator control in conjunction with the CausesValidation and ValidationGroup settings on these controls. The action on the control fired the custom validator, which was defined with a client-side validation function that did the hide-content/show-message work (and returned a meaningless IsValid setting). This also forced me to define a different ValidationGroup setting for my real data-entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just for my show/hide trick.

    For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight in the DropDownList and CheckBox declarative syntax. Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this. All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add("onClick", "myJavaScriptFunctionName()") for the checkboxes, and for the DropDownLists (which become select tags) use "onChange" instead of "onClick". This is exactly the type of thing that the Attributes collection is there for: so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. (A short sketch of this Page_Init approach follows below.) Ugh!

    A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process. And I got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today. Oh, and one more thing: don't get lulled into too much reliance on the whiz-bang tool to do it for you. After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
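    Here is that sketch (the control names chkNotify and ddlFilter and the client function showWaitMessage() are made up for illustration; only the Attributes.Add calls come from the story above):

        // In the page's code-behind. The controls have AutoPostBack="true" in the
        // markup, and showWaitMessage() is the client-side JavaScript that hides
        // the GridView and shows the "Processing..." message.
        protected void Page_Init(object sender, EventArgs e)
        {
            // A CheckBox renders as <input type="checkbox">, so hook its onclick.
            chkNotify.Attributes.Add("onclick", "showWaitMessage();");

            // A DropDownList renders as a <select> tag, so onchange is the event to use.
            ddlFilter.Attributes.Add("onchange", "showWaitMessage();");
        }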

    Read the article

  • Launching Ops Center 12c

    - by user12601629
    Oracle Enterprise Manager Ops Center 12c is the most ambitious version of the Ops Center tooling that we've ever released. I think that makes it appropriate that we launched it in grand style! When it became clear we were going to be complete with the 12c final release about this time of year, the marketing team proposed that we roll the launch of 12c into Oracle OpenWorld Tokyo. I thought that sounded like a fine idea! You see, I have always loved Japan. I even studied a bit of Japanese back in school.

    OpenWorld Tokyo was an outstanding event this year. It was held in Roppongi, one of the most stylish districts in Tokyo. And, to make things even better, the sakura (cherry blossoms) were blooming. If you've never been in Japan for cherry blossom season, it's a must-see! [The original post includes a picture from Roppongi, near the conference, and another near the Imperial Palace, where a couple of friends from the local sales team took me before my flight out.]

    So, now back to the product launch! We chose to launch the product in John Fowler's "Engineered Systems" keynote address. It made perfect sense because of the close ties of Ops Center to the Systems portfolio of products. It was a packed house for the keynote, with hundreds more people in overflow rooms in other parts of the venue.

    While there are countless new features in Ops Center 12c that customers will love, I had to limit myself to discussing just three: Mission Critical Clouds, Solaris 11, and Engineered Systems.

    So, what does Mission Critical Cloud mean? It means we've expanded EM's cloud capabilities in a couple of key areas. First, we've expanded the self-service provisioning capabilities to include SPARC, not just x86. Now you can build clouds of Solaris Zones with ease! Second, we've much more deeply integrated high-end storage and network management into the cloud layers. This makes our IaaS story much more powerful!

    For Solaris 11, we didn't simply port our monitoring agent to S11. That would have been easy, but also boring! We support S11 deeply: full access to the power of the IPS packaging system, the new virtualized networking stack, new Zones features, and the Auto Install framework. If you're ready to try Solaris 11, then Ops Center is ready for you.

    Last is the area of Engineered Systems. These combinations of hardware and software are fast and powerful. However, we're also on a mission to make them ever easier to manage, and we've made major strides with Ops Center 12c. Manage these systems as racks, not individual components. The new capabilities for engineered systems like Exalogic and SPARC SuperCluster are striking. You can read more here: Oracle Unveils Oracle Enterprise Manager Ops Center 12c.

    So, I'll wrap this up with one final bit of fun. One of my friends from the Oracle marketing department found a super cool place to get dinner: a restaurant called Gonpachi. It turns out this is the place that inspired the scene in the Quentin Tarantino movie Kill Bill where Uma Thurman fights 88 ninjas. It was surely a good time. Check it out next time you're in Tokyo.

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems finding my way into the component-oriented XNA design. I have read an overview of the general design pattern and googled a lot of XNA examples. However, they seem to be on opposite sides. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move a person vs. scroll text in a menu). Yet in XNA, GameComponent::Update(GameTime) is called automatically, without a reference to the current player. The only XNA examples I found built some sort of higher-level keyboard engine into the game component, like this:

        class InputComponent : GameComponent
        {
            public void keyReleased(Keys key);
            public void keyPressed(Keys key);
            public bool keyDown(Keys key);

            public override void Update(GameTime gameTime)
            {
                // compare previous state with current state and
                // determine if released, pressed, down or nothing
            }
        }

    Some others went a bit further, making it possible to use a Service Locator with a design like this:

        interface IInputComponent
        {
            void downwardsMovement(Keys key);
            void upwardsMovement(Keys key);
            bool pausedGame(Keys key);

            // determine which keys are pressed and what that means;
            // can be done differently for different inputs in different implementations
            void Update(GameTime gameTime);
        }

    Yet then I wonder if it is possible to design an input class that resolves all possible situations. Like in a menu, a mouse click can mean "click that button", but in gameplay it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent, and where is the rest handled? What I had in mind when I heard about Game Components and Service Locator in XNA was something like this: use Game Components to run the InputHandler automatically in the loop, and use the Service Locator to be able to switch input at runtime (i.e. let the player choose if he wants to use a gamepad or a keyboard, or which shall be player 1 and which player 2; a sketch of this registration is included after the entry). However, now I cannot see how this can be done. The first code example does not seem flexible enough, as on a gamepad you could require some combination of buttons for something that is possible on a keyboard with only one button or with the mouse. The second code example seems really hard to implement, because the InputComponent has to know in which context we currently are. Moreover, you could imagine your application to be multi-layered and let the keystroke go through all layers to the bottom layer, which requires a different behaviour than the InputComponent would have guessed from the top layer. The general design pattern of passing the Player to update() does not have a representation in XNA, and I also cannot see how and where to decide which class should be passed to update(). Most of the time it is of course the player, but sometimes there could be menu items you have to or can click. I see that the question in general is already dealt with here, but probably from a more elaborate point of view. At least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. Like, if I have a key-up event, should it go to the text box or to the player?
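    Here is one hedged sketch of the registration half of that idea (IInputService, KeyboardInput, and the property names are invented; Game.Services, GameComponent, and Keyboard.GetState() are the standard XNA types), just to show where the pieces would sit:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        // Expose *meanings* rather than raw keys, so a hypothetical GamepadInput
        // implementation can map buttons or button combinations however it likes.
        public interface IInputService
        {
            bool MoveLeft { get; }
            bool Pause { get; }
        }

        public class KeyboardInput : GameComponent, IInputService
        {
            public KeyboardInput(Game game) : base(game) { }

            public bool MoveLeft { get; private set; }
            public bool Pause { get; private set; }

            public override void Update(GameTime gameTime)
            {
                KeyboardState state = Keyboard.GetState();
                MoveLeft = state.IsKeyDown(Keys.Left);
                Pause = state.IsKeyDown(Keys.Escape);
                base.Update(gameTime);
            }
        }

        // In the Game subclass (constructor or Initialize):
        //     var keyboard = new KeyboardInput(this);
        //     Components.Add(keyboard);                             // updated by the game loop
        //     Services.AddService(typeof(IInputService), keyboard); // service locator
        //
        // Any consumer then resolves the interface without knowing the concrete type,
        // which is what makes a runtime keyboard/gamepad swap possible:
        //     var input = (IInputService)Game.Services.GetService(typeof(IInputService));
        //     if (input.MoveLeft) { /* move the player */ }

    This does not answer the context question (menu click vs. weapon fire); it only shows how the component and the locator fit together.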

    Read the article

  • Why won't my code work in Ubuntu Server 11.10? Is it because of gd library?

    - by Derrick
    I get this error when running the following code: No such file found at "widgets/104-text.png". I know that the code works, because it works on my other (non-Ubuntu) server. I do not know if it is the GD library or what; I tried both the bundled version and the non-bundled one, and neither makes this code work.

        $con = mysql_connect("localhost", "user", "abc123");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("satabase_name", $con);

        $productid2 = $this->product->id;
        $thename = mysql_query("SELECT * FROM pshop_product_lang WHERE id_product = '$productid2' LIMIT 1");
        $thename2 = mysql_fetch_array($thename);
        $string2 = $thename2['name'];
        $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2;

        $font = 4;
        $width = imagefontwidth($font) * strlen($string);
        $height = imagefontheight($font);
        $image = imagecreatetruecolor($width, $height);
        $white = imagecolorallocate($image, 255, 255, 255);
        $black = imagecolorallocate($image, 0, 0, 0);
        imagefill($image, 0, 0, $white);
        imagestring($image, $font, 0, 0, $string, $black);
        imagepng($image, 'widgets/' . $productid2 . '-text.png');

        $getimg110 = mysql_query("SELECT * FROM pshop_image WHERE id_product = '$productid2'");
        $gotimg110 = mysql_fetch_array($getimg110);
        $slash110 = addcslashes($gotimg110[id_image], '\0..\999999999999999999999');
        $str110 = str_replace('\\', '/', $slash110);
        $newimg110 = '<img src="img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg" />';

        include("conf.inc.php");
        include('ImageWorkshop.php');

        // Initialization of the layers you need
        $pinguLayer = new ImageWorkshop(array(
            'imageFromPath' => 'widgets/background.png',
        ));
        $pinguLayer2 = new ImageWorkshop(array(
            'imageFromPath' => 'img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg',
        ));
        $pinguLayer3 = new ImageWorkshop(array(
            'imageFromPath' => 'widgets/' . $productid2 . '-text.png',
        ));

        // resize layers
        $thumbWidth2 = 150; // px
        $thumbHeight2 = 150;
        $thumbWidth = 400; // px
        $thumbHeight = 200;
        $pinguLayer2->resizeInPixel($thumbWidth2, $thumbHeight2);
        $pinguLayer->resizeInPixel($thumbWidth, $thumbHeight);

        // Add 2 layers on top of the base layer
        $pinguLayer->addLayerOnTop($pinguLayer2, null, null, 'LM');
        $pinguLayer->addLayerOnTop($pinguLayer3, 70, 25, 'MM');

        // Saving the result in a folder
        $pinguLayer->save("widgets/", $productid2 . ".gif", true, null, 95);

    The file path is correct; however, this part of the code is not creating the image as it is supposed to:

        $thename2 = mysql_fetch_array($thename);
        $string2 = $thename2['name'];
        $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2;

        $font = 4;
        $width = imagefontwidth($font) * strlen($string);
        $height = imagefontheight($font);
        $image = imagecreatetruecolor($width, $height);
        $white = imagecolorallocate($image, 255, 255, 255);
        $black = imagecolorallocate($image, 0, 0, 0);
        imagefill($image, 0, 0, $white);
        imagestring($image, $font, 0, 0, $string, $black);
        imagepng($image, 'widgets/' . $productid2 . '-text.png');

    Read the article

  • How to populate a generic list of objects in C# from SQL database

    - by developr
    I am just learning ASP.NET/C# and trying to incorporate best practices into my applications. Everything that I read says to layer my applications into DAL, BLL, UI, etc. based on separation of concerns. Instead of passing DataTables around, I am thinking about using custom objects so that I am loosely coupled to my data layer and can take advantage of IntelliSense in VS. I assume these objects would be considered DTOs?

    First, where do these objects reside in my layers? BLL, DAL, other?

    Second, when populating from SQL, should I loop through a DataReader to populate the list, or first fill a DataTable and then loop through the table to populate the list? I know you should close the database connection as soon as possible, but it seems like even more overhead to populate the DataTable and then loop through that for the list. (A sketch of the reader approach follows below.)

    Third, everything I see these days says use Linq2SQL. I am planning to learn Linq2SQL, but at this time I am working with a legacy database that doesn't have foreign keys set up, and I do not have the ability to fix that at the moment. Also, I want to learn more about C# before I start getting into ORM solutions like NHibernate. At the same time, I don't want to type out all the connection and SQL plumbing for every query. Is it OK to use the Enterprise Library DAAB for now?
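    A hedged sketch of the reader approach (the Product class, the Products table, and ProductRepository are all invented for illustration; the pattern, not the names, is the point):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // A simple DTO; it can live in a shared project (or the BLL) so that
        // both the DAL and the UI can reference it without circular dependencies.
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductRepository
        {
            // Reading straight from the SqlDataReader into the list skips the
            // intermediate DataTable, and the using blocks close the connection
            // as soon as the loop finishes.
            public static List<Product> GetProducts(string connectionString)
            {
                var products = new List<Product>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            products.Add(new Product
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    }
                }
                return products;
            }
        }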

    Read the article

  • Matplotlib pick event order for overlapping artists

    - by Ajean
    I'm hitting a very strange issue with matplotlib pick events. I have two artists that are both pickable and are non-overlapping to begin with ("holes" and "pegs"). When I pick one of them, during the event handling I move the other one to where I just clicked (moving a "peg" into the "hole"). Then, without doing anything else, a pick event from the moved artist (the peg) is generated, even though it wasn't there when the first event was generated. My only explanation is that somehow the event manager is still moving through artist layers when the event is processed, and therefore hits the second artist after it is moved under the cursor. So then my question is: how do pick events (or any events, for that matter) iterate through overlapping artists on the canvas, and is there a way to control it? I think I would get my desired behavior if it always moved from the top down (rather than bottom up, or randomly). I haven't been able to find sufficient documentation, and a lengthy search on SO has not revealed this exact issue. Below is a working example that illustrates the problem, with PathCollections from scatter as pegs and holes:

        import matplotlib.pyplot as plt
        import sys

        class peg_tester():

            def __init__(self):
                self.fig = plt.figure(figsize=(3,1))
                self.ax = self.fig.add_axes([0,0,1,1])
                self.ax.set_xlim([-0.5,2.5])
                self.ax.set_ylim([-0.25,0.25])
                self.ax.text(-0.4, 0.15, 'One click on the hole, and I get 2 events not 1',
                             fontsize=8)
                self.holes = self.ax.scatter([1], [0], color='black', picker=0)
                self.pegs = self.ax.scatter([0], [0], s=100, facecolor='#dd8800',
                                            edgecolor='black', picker=0)
                self.fig.canvas.mpl_connect('pick_event', self.handler)
                plt.show()

            def handler(self, event):
                if event.artist is self.holes:
                    # If I get a hole event, then move a peg (to that hole) ...
                    # but then I get a peg event also with no extra clicks!
                    offs = self.pegs.get_offsets()
                    offs[0,:] = [1,0]  # Moves left peg to the middle
                    self.pegs.set_offsets(offs)
                    self.fig.canvas.draw()
                    print 'picked a hole, moving left peg to center'
                elif event.artist is self.pegs:
                    print 'picked a peg'
                sys.stdout.flush()  # Necessary when in ipython qtconsole

        if __name__ == "__main__":
            pt = peg_tester()

    I have tried setting the zorder to make the pegs always sit above the holes, but that doesn't change how the pick events are generated, and in particular doesn't stop this funny phantom event.

    Read the article

  • Application/Server dependency mapping

    - by David Stratton
    I'm just curious as to whether such a tool exists (free, open source, or commercial but for a reasonable price) before I build it myself. We're looking for a simple solution to simplify taking web apps online and offline when a server is undergoing maintenance. The idea is that we be able to mark a server as unavailable, and then mark everything dependent on it (directly and indirectly) as offline.

    Our first proof of concept is running: we created an .aspx page that lists, in a GridView, the various applications that have an App_Offline.html file with a friendly "Down for Maintenance" message. In the GridView, each app has a LinkButton that, when clicked, either renames App_Offline.htm to App_Offline.html or vice versa to take the app online or offline. (A sketch of the rename is shown after this entry.)

    The next step is to set up all of our dependencies. For example, our store locator would be dependent on our web services, which in turn are dependent on our SQL Server. (That's a simple example; we can easily have several layers, or one app dependent on multiple servers, etc.) In this example, if the SQL Server goes down, we would need to drill down recursively to find all apps that depend on it, and then turn them off and on by renaming the App_Offline file appropriately.

    I realize this will be relatively simple to build, but could be complex to manage. I'm sure we're not the first team to think of this concept, and I'm wondering if there are any open source tools, or if any of you have done something similar and can help us avoid pitfalls.

    Edit - Update: I found the category of software I'm looking for: it's called CMDB (Configuration Management Database), and it's generally more of a network admin tool than a developer tool. I found some open source products in this category, but none written in .NET. I had considered moving this question to ServerFault.com when I realized I was looking for a network admin type tool, but since I'm looking for code and a modifiable solution, I'll keep the question here.
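    The rename trick itself is tiny; here is a hedged C# sketch (AppOfflineToggle and appRoot are invented names; the .htm/.html convention is the one described above). ASP.NET only honors the magic file when it is named exactly App_Offline.htm, so renaming it to .html parks it while the app stays online:

        using System.IO;

        public static class AppOfflineToggle
        {
            // appRoot is the application's physical path (e.g. read from config).
            public static void SetOffline(string appRoot, bool offline)
            {
                string live   = Path.Combine(appRoot, "App_Offline.htm");  // app is offline
                string parked = Path.Combine(appRoot, "App_Offline.html"); // app is online

                if (offline && File.Exists(parked))
                    File.Move(parked, live);
                else if (!offline && File.Exists(live))
                    File.Move(live, parked);
            }
        }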

    Read the article

  • CALayer won't display

    - by Paul from Boston
    I'm trying to learn how to use CALayers for a project and am having trouble getting sublayers to display. I created a vanilla view-based iPhone app in Xcode for these tests. The only real code is in the view controller, which sets up the layers and their delegates. There is a delegate, DelegateMainView, for the view controller's view layer, and a second, different one, DelegateStripeLayer, for an additional layer. The view controller code is all in awakeFromNib:

        - (void)awakeFromNib
        {
            DelegateMainView *oknDelegate = [[DelegateMainView alloc] init];
            self.view.layer.delegate = oknDelegate;

            CALayer *newLayer = [CALayer layer];
            DelegateStripeLayer *sldDelegate = [[DelegateStripeLayer alloc] init];
            newLayer.delegate = sldDelegate;

            [self.view.layer addSublayer:newLayer];
            [newLayer setNeedsDisplay];
            [self.view.layer setNeedsDisplay];
        }

    The two delegates are simply wrappers for the CALayer delegate method drawLayer:inContext:, i.e.,

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
        {
            CGRect bounds = CGContextGetClipBoundingBox(context);
            // ... do some stuff here ...
            CGContextStrokePath(context);
        }

    each a bit different. The layer view.layer is drawn properly, but newLayer is never drawn. If I put breakpoints in the two delegates, the program stops in DelegateMainView but never reaches DelegateStripeLayer. What am I missing here? Thanks.

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer, and a data layer. This is what I heard for more than 5 years. When I started working, I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article, you should have:

    - UI Layer and Presentation Layer (Model View pattern)
    - Service Layer (WCF)
    - Business Layer
    - Data Access Layer

    The Service Layer is, to me, one of the best ideas I've heard since I started working: your UI is then completely "disconnected" from the business and data layers. Now, when I went deeper into the provided source code, I began to have some questions. Can you help me answer them?

    Question #0: is this a good enterprise application template in your opinion?

    Question #1: where should I host the service layer? Should it be a Windows Service, or something else?

    Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?

    Question #3: if you agree with me on Question 2, which kind of binding would you use? (See the sketch after this entry.)

    Looking forward to hearing from you. Have a nice weekend! Marco
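    Not part of the original question, but a minimal sketch of one common pattern relevant to Questions 2 and 3: the same contract exposed on two endpoints, WSHttpBinding for interoperability and the binary-encoded NetTcpBinding for faster .NET-to-.NET calls. The contract, class, and addresses are all invented for illustration:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string Ping();
        }

        public class OrderService : IOrderService
        {
            public string Ping() { return "pong"; }
        }

        class Hosting
        {
            // This could just as well run inside a Windows Service's OnStart.
            static void Main()
            {
                var host = new ServiceHost(typeof(OrderService));

                // Interoperable endpoint for non-.NET clients.
                host.AddServiceEndpoint(typeof(IOrderService),
                    new WSHttpBinding(), "http://localhost:8080/orders");

                // Binary-encoded endpoint for .NET clients; typically much faster.
                host.AddServiceEndpoint(typeof(IOrderService),
                    new NetTcpBinding(), "net.tcp://localhost:8081/orders");

                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }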

    Read the article

  • Mimic CALayer shadow properties found in iPhone OS 3.2 for OS 3.1

    - by niblha
    The CALayer shadow properties like shadowOffset, shadowRadius, and shadowColor are not available in iPhone OS versions below 3.2, and I'm wondering how I could mimic that functionality for use with 3.1 and below. I want to use this to be able to add drop shadows to UIViews in a clean way, so that the shadows are drawn at the layer level somehow, and not by drawing them in a view's -(void)drawRect:(CGRect)rect method, which requires shrinking the actual view's frame to accommodate the shadow. (This shrinking approach has been proposed in the other UIView drop shadow related questions I found here on SO.) I was thinking a layered approach would be cleaner.

    For example, I tried subclassing CALayer and adding a separate shadow layer as a sublayer, but then that would be drawn on top of whatever was drawn in the drawRect: method of the UIView that had the main layer as its backing layer. I've also tried implementing the subclassed CALayer's drawInContext: something like this:

        - (void)drawInContext:(CGContextRef)ctx
        {
            // code to draw shadow for a frame the size of the layer's frame
            [super drawInContext:ctx];
        }

    But then the shadow is still clipped to the current clipping bounding box of the context, which seems to be the layer's own frame. I also had some idea of redirecting the drawing of the main layer to a sublayer, which would be placed above another sublayer which had the shadow drawn onto it. Then I would probably get rid of the clipping, and the shadow would be farthest back. But I couldn't really wrap my head around how I would do that, and it doesn't really feel like a clean approach. Any ideas on how to go about this?

    Just to make clear how my UIView drop shadow related question is different from the other ones I found here on SO: I do not want to shrink the actual drawing frame of a UIView to accommodate a shadow. I want it to somehow be on a separate layer in the background, without being clipped.

    Read the article

  • NHibernate Generators

    - by Dan
    What is the best tool for generating entity classes and/or hbm files and/or SQL scripts for NHibernate? The list below is from http://www.hibernate.org/365.html. Which is the best, and why?
    - Moregen: free, open-source (GPL) O/R generator that can merge into existing Visual Studio projects. Also merges changes to generated classes.
    - NConstruct Lite: free tool for generating NHibernate O/R mapping source code. Supports different databases (Microsoft SQL Server, Oracle, Access).
    - GENNIT NHibernate Code Generator: free/commercial Web 2.0 code generation of NHibernate code using a WYSIWYG online UML designer.
    - GenWise Studio with NHibernate Template: commercial product; imports your existing database and generates all XML and classes, including factories. It can also generate an ASP.NET web application for your NHibernate BO layer automatically.
    - HQL Analyzer and hbm.xml GUI Editor
    - ObjectMapper by Mats Helander: a mapping GUI with NHibernate support.
    - MyGeneration: a template-based code generator GUI. Its template library includes templates for generating mapping files and classes from a database.
    - AndroMDA: an open-source code generation framework that uses Model Driven Architecture (MDA) to transform UML models into deployable components. It supports generation of data access layers that use NHibernate as their persistence framework.
    - CodeSmith Template for NH
    - NHibernate Helper Kit: a VS2005 add-in to generate classes and mapping files.
    - NConstruct - Intelligent Software Factory: commercial product; full .NET C# source code generation for all tiers of the information system through a simple wizard procedure. O/R mapping based on NHibernate. For both WinForms and ASP.NET 2.0.
    (A sketch of the kind of output these tools produce follows below.)
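    For comparison with the generated output of these tools, here is a minimal hand-written sketch of the kind of artifact they produce: a plain entity class plus the matching hbm.xml mapping. The Customer table, column names, and assembly/namespace are hypothetical.

        // Customer.cs -- members are virtual so NHibernate can proxy them
        public class Customer
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        <!-- Customer.hbm.xml -- the matching mapping file -->
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyApp" namespace="MyApp.Domain">
          <class name="Customer" table="Customer">
            <id name="Id" column="CustomerId">
              <generator class="native" />
            </id>
            <property name="Name" column="Name" length="100" />
          </class>
        </hibernate-mapping>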

    Read the article

  • C# using namespace directive in nested namespaces

    - by MoSlo
    Right, I've usually used 'using' directives as follows:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            // awesome award-winning class declarations making use of LINQ
        }
    I've recently seen examples such as:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            // awesome award-winning class declarations making use of LINQ

            namespace DataLibrary
            {
                using System.Data; // data access layers and whatnot
            }
        }
    Granted, I understand that I can put a using directive inside my namespace declaration. Such a thing makes sense to me if your namespaces share the same root (they stay organized):
        using System;
        namespace One { }
        namespace Two
        {
            using System.Data;
        }
    But what about nested namespaces? Personally, I would leave all using declarations at the top where you can find them easily; instead, it looks like they're being spread all over the source file. Is there a benefit to using directives being placed this way in nested namespaces, such as memory management or the JIT compiler?
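    For what it's worth, placement is not purely cosmetic: using directives are a compile-time construct (there is no memory or JIT effect either way), but directives inside a namespace are consulted before file-level ones during name resolution. A minimal sketch, with all type and namespace names hypothetical:

        using System.Threading; // System.Threading also defines a Timer

        // Hypothetical library namespace, defined here so the sketch is self-contained.
        namespace MyCompany.Scheduling
        {
            public class Timer
            {
                public void Start() { System.Console.WriteLine("custom timer started"); }
            }
        }

        namespace AwesomeLib
        {
            using MyCompany.Scheduling;

            class Worker
            {
                static void Main()
                {
                    // With both directives at the top of the file, 'Timer' would be
                    // ambiguous. Because this directive sits inside AwesomeLib, the
                    // inner scope is consulted first and 'Timer' resolves to
                    // MyCompany.Scheduling.Timer, not System.Threading.Timer.
                    var timer = new Timer();
                    timer.Start();
                }
            }
        }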

    Read the article

  • WCF for a shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved: a web service needs to be accessible from multiple clients simultaneously, and the service needs to return a result from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are talking about a couple of thousand or more queries per minute. My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) that holds a list object and custom locking for accessing it. So basically I have a WCF service singleton holding a list: shared data, multiple clients (see the sketch below). What I don't like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right". What I really want is a Windows service running an instance of an IP-list-holding container class, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely with minimal blocking. But using another WCF channel between them would not really take me far from the initial draft implementation, or would it? What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks!
    UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers data cleanup every 10-15 minutes according to some rules.
    UPDATE 2: A separate benchmark project will be kicked off that should use MSSQL as a data backend (instead of the in-memory list).
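    A minimal sketch of the singleton-plus-locking draft described above. The contract and member names are hypothetical; a reader/writer lock suits this workload because validation reads vastly outnumber the occasional add and cleanup writes.

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.Threading;

        [ServiceContract]
        public interface IIpListService
        {
            [OperationContract]
            bool IsListed(string ip);

            [OperationContract]
            void Add(string ip);
        }

        // One instance shared by all callers; WCF dispatches calls concurrently,
        // so access to the shared set is guarded by a reader/writer lock.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class IpListService : IIpListService
        {
            private readonly HashSet<string> ips = new HashSet<string>();
            private readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

            public bool IsListed(string ip)
            {
                gate.EnterReadLock();
                try { return ips.Contains(ip); }
                finally { gate.ExitReadLock(); }
            }

            public void Add(string ip)
            {
                gate.EnterWriteLock();
                try { ips.Add(ip); }
                finally { gate.ExitWriteLock(); }
            }
        }

    Splitting the list holder and the WCF front end into two processes would mostly add a second serialization/IPC hop; one option is to keep them in a single process but behind separate classes, which gives the same separation of concerns without that cost.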

    Read the article

  • Open Source Web Frameworks: Security

    - by trappedIntoCode
    How secure are popular open-source web frameworks? I am particularly interested in popular frameworks like Rails and Django. If I am building a site which is going to do heavy e-commerce, is it OK to use frameworks like Django and Satchmo? Is security compromised because of their open architecture? I know being open source does not mean being downright open to hackers (Linux uses a superb authentication mechanism), but the web is a different game. What can be done in this regard?
    UPDATE: Thanks for the answers, guys. I understand that I will have to find a suitable hosting service for a secure e-commerce application and that additional layers of security will be needed. I understand that Django and Rails have been designed with security in mind, covering the most common forms of attack like XSS, injection, etc. (the Django book has a chapter on security). I was expecting comments from security gurus. If you are a security guru, would you recommend that an important site, which is likely going to be popular, be built on Django or Rails?

    Read the article

  • Why is "Fixup" needed for Persistence Ignorant POCO's in EF 4?

    - by Eric J.
    One of the much-anticipated features of Entity Framework 4 is the ability to use POCOs (Plain Old CLR Objects) in a Persistence Ignorant manner (i.e. they don't "know" that they are being persisted with Entity Framework vs. some other mechanism). I'm trying to wrap my head around why it's necessary to perform association fixups and use FixupCollection in my "plain" business object. That requirement seems to imply that the business object can't be completely ignorant of the persistence mechanism after all (in fact the word "fixup" sounds like something needs to be fixed/altered to work with the chosen persistence mechanism). Specifically I'm referring to the Association Fixup region that's generated by the ADO.NET POCO Entity Generator, e.g.:
        #region Association Fixup
        private void FixupImportFile(ImportFile previousValue)
        {
            if (previousValue != null && previousValue.Participants.Contains(this))
            {
                previousValue.Participants.Remove(this);
            }
            if (ImportFile != null)
            {
                if (!ImportFile.Participants.Contains(this))
                {
                    ImportFile.Participants.Add(this);
                }
                if (ImportFileId != ImportFile.Id)
                {
                    ImportFileId = ImportFile.Id;
                }
            }
        }
        #endregion
    as well as the use of FixupCollection. Other common persistence-ignorant ORMs don't have similar restrictions. Is this due to fundamental design decisions in EF? Is some level of non-ignorance here to stay even in later versions of EF? Is there a clever way to hide this persistence dependency from the POCO developer? How does this work out in practice, end-to-end? For example, I understand support was only recently added for ObservableCollection (which is needed for Silverlight and WPF). Are there gotchas in other software layers arising from the design requirements of EF-compatible POCO objects?

    Read the article

  • How do I keep a CALayer, sublayer of a CATiledLayer, from changing its scale after a zoom?

    - by David
    I have a CATiledLayer that is used to display a PDF page (this CATiledLayer is the layer type of my UIView, which is a subview of a UIScrollView). I want to add overlay markers on this page, so I add a sublayer to my CATiledLayer. This sublayer in turn hosts the different markers' layers and acts as a grouping layer. So graphically, I have (keep in mind that I have multiple markers, which are also CALayers; this is ASCII art after all):
        pdf page (CATiledLayer)
        ----------------------
        | CALayer            |
        |  +---------+       |
        |  | +----+  |       |
        |  | |mker|  |       |
        |  | +----+  |       |
        |  +---------+       |
        |                    |
        ----------------------
    I have set up the canonical drawLayer:inContext: in my view for drawing the PDF. When I zoom in for more detail, the PDF gets rendered correctly, but the markers get scaled. No matter what I do to the bounds of the CALayer, my markers always become bigger and appear jagged. I would like the markers to stay the same size as when they were initialized and first shown when the view was drawn. Is this possible, or am I using the wrong approach? Should I do special drawing for my contained CALayer in the drawLayer:inContext: message? As you can see, there are things I am missing to resolve my problem. Thank you for any help you can provide.

    Read the article

  • How to serve tiff WMS imagery through GeoServer

    - by mikem419
    Ok, so I am new to the GeoServer/database world. I am a student intern and I have been given the task of setting up a WMS using GeoServer. I have never done any database work before this, so bear with me if my questions leave out important information. I am using GeoServer 2.0.1 in standalone mode (downloaded using Jetty) with PostgreSQL 8.4 installed. I went through the nyc_roads and nyc_buildings install demo in the GeoServer documentation, but I still do not understand how I should go about serving up some test images. I noticed that the nyc_roads setup included a .sql file that was responsible for setting up the nyc_buildings database; I do not know how/where this file was generated. Our test images are .tiff and .jpeg. I have successfully been able to make a WMS call on the local GeoServer machine and have opened the included demo imagery. I now wish to add these .tiff and .jpeg images to GeoServer and access them through WMS. I have tried copying the images to the GeoServer data directory, adding a new data store, and adding layers, but I always receive an error regarding the "input stream." Again, I am very sorry if I am leaving out a lot of vital information; this is as much as I know. Thanks! (A request sketch for testing a published layer follows below.)
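    Once a raster store and layer are published, one way to sanity-check the layer is a plain WMS GetMap request from code. A minimal sketch; the workspace/layer name, bounding box, and port are hypothetical and must match whatever you actually publish. Note also that GeoServer needs georeferencing to publish a raster: a GeoTIFF carries it internally, while a plain .jpeg typically needs a world file alongside it, which may be why a bare image produces an "input stream" style error.

        using System;
        using System.Net;

        class WmsSmokeTest
        {
            static void Main()
            {
                // Hypothetical workspace:layer name; adjust to your published layer.
                string url = "http://localhost:8080/geoserver/wms"
                           + "?service=WMS&version=1.1.0&request=GetMap"
                           + "&layers=myworkspace:test_tiff&styles="
                           + "&bbox=-74.02,40.70,-73.97,40.75"
                           + "&width=512&height=512&srs=EPSG:4326&format=image/png";

                using (var client = new WebClient())
                {
                    // Fetch the rendered map; an image here confirms the layer works.
                    client.DownloadFile(url, "test_tiff.png");
                }
                Console.WriteLine("Saved test_tiff.png");
            }
        }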

    Read the article
