Search Results

Search found 1451 results on 59 pages for 'theory'.

  • DBCC MEMUSAGE in 2005/8?

    - by steveh99999
    I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead a DMV – sys.dm_os_buffer_descriptors – can be used to display data cache contents, but this doesn't quite give you the same output as DBCC MEMUSAGE. I'm also aware that you can use Quest's Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I've wrapped Umachandar's original query in a CTE to produce summary information:

        ;WITH memusage_CTE AS
        (
            SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
                   COALESCE(p1.object_id, p2.object_id) AS object_id,
                   COALESCE(p1.index_id, p2.index_id) AS index_id,
                   bd.row_count, bd.free_space_in_bytes,
                   CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
            FROM sys.dm_os_buffer_descriptors AS bd
            JOIN sys.allocation_units AS au
                ON au.allocation_unit_id = bd.allocation_unit_id
            OUTER APPLY
            (
                SELECT TOP(1) p.object_id, p.index_id
                FROM sys.partitions AS p
                WHERE p.hobt_id = au.container_id AND au.type IN (1, 3)
            ) AS p1
            OUTER APPLY
            (
                SELECT TOP(1) p.object_id, p.index_id
                FROM sys.partitions AS p
                WHERE p.partition_id = au.container_id AND au.type = 2
            ) AS p2
            WHERE bd.database_id = DB_ID()
              AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
        )
        SELECT TOP 20
               DB_NAME(database_id) AS 'Database',
               OBJECT_NAME(object_id, database_id) AS 'Table Name',
               index_id,
               COUNT(*) AS 'Pages in Cache',
               SUM(dirtyPage) AS 'Dirty Pages'
        FROM memusage_CTE
        GROUP BY database_id, object_id, index_id
        ORDER BY COUNT(*) DESC

    I'm not 100% happy with the results of the above query, however. I've noticed that on a busy BizTalk MessageBox database it will return information on pages that contain ghost rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much ghost data. For more information on the background ghost cleanup process, see this article by Paul Randal. However, I think the results of this query should still be of interest to a DBA, and I have another post to come shortly regarding an example I encountered where this information proved useful to me.

    I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column – numa_node. I'm interested to see how this is populated and how useful this column can be on a NUMA-enabled system. I'm assuming that, in theory, you could use this column to help analyse how your tables are spread across the NUMA-enabled data cache?
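
    As a rough illustration of that last idea, here is a minimal C# sketch (mine, not from the post) that groups the buffer descriptors by numa_node to see how the data cache is spread across nodes; the connection string is a placeholder, and the column only exists in SQL 2008 and later:

        using System;
        using System.Data.SqlClient;

        class NumaCacheCheck
        {
            static void Main()
            {
                const string query =
                    @"SELECT numa_node, COUNT(*) AS pages_in_cache
                      FROM sys.dm_os_buffer_descriptors
                      GROUP BY numa_node
                      ORDER BY numa_node;";

                // Placeholder connection string; requires VIEW SERVER STATE permission.
                using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // One row per NUMA node: a rough view of the cache spread.
                        while (reader.Read())
                            Console.WriteLine("node {0}: {1} pages", reader[0], reader[1]);
                    }
                }
            }
        }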

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high-traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out; although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use rules out SQL dependency caching. We tried SQL replication as a kind of load balancing, but that didn't prove very successful because the replication process is quite demanding on the servers too, and it needed to be run frequently, as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised per user we can only do so much.

    Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query, it gets recorded, and the result is requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time, the result is retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications. This seems like a good idea in theory: there's still the same amount of data moving back and forth from the web server, but the SQL server is relieved of the work of processing the repeat queries.

    I wonder how difficult it would be to build a service that emulates requests and responses from SQL Server, whether SQL Server's own caching already does enough of this that it wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
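
    To make the idea concrete, here is a minimal in-process C# sketch of the cache-aside behaviour described above (all names are hypothetical; a real version would run out of process and handle invalidation, e.g. via query notifications):

        using System;
        using System.Collections.Concurrent;
        using System.Data.SqlClient;

        // Results are keyed by query text and reused within a time window
        // instead of being sent to SQL Server again.
        class QueryCache
        {
            private readonly string _connectionString;
            private readonly TimeSpan _ttl;
            private readonly ConcurrentDictionary<string, Tuple<DateTime, object>> _cache =
                new ConcurrentDictionary<string, Tuple<DateTime, object>>();

            public QueryCache(string connectionString, TimeSpan ttl)
            {
                _connectionString = connectionString;
                _ttl = ttl;
            }

            public object ExecuteScalarCached(string sql)
            {
                Tuple<DateTime, object> entry;
                if (_cache.TryGetValue(sql, out entry) && DateTime.UtcNow - entry.Item1 < _ttl)
                    return entry.Item2; // repeat request within the window: no round trip

                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    object result = cmd.ExecuteScalar(); // first-time query: hit SQL Server
                    _cache[sql] = Tuple.Create(DateTime.UtcNow, result);
                    return result;
                }
            }
        }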

    Read the article

  • Can a Printer Print White?

    - by Jason Fitzpatrick
    The vast majority of the time we all print on white media: white paper, white cardstock, and other neutral white surfaces. But what about printing white? Can modern printers print white, and if not, why not? Read on as we explore color theory, printer design choices, and why white is the foundation of the printing process.

    Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Image by Coiote O.; available as wallpaper here.

    The Question

    SuperUser reader Curious_Kid is, well, curious about printers. He writes:

    I was reading about different color models, when this question hit my mind. Can the CMYK color model generate white color? Printers use CMYK color mode. What will happen if I try to print a white colored image (rabbit) on black paper with my printer? Will I get any image on the paper? Does the CMYK color model have room for white?

    The Answer

    SuperUser contributor Darth Android offers some insight into the CMYK process:

    You will not get anything on the paper with a basic CMYK inkjet or laser printer. The CMYK color mixing is subtractive, meaning that it requires the base that is being colored to have all colors (i.e., white) so that it can create color variation through subtraction:

        White - Cyan - Yellow = Green
        White - Yellow - Magenta = Red
        White - Cyan - Magenta = Blue

    White is represented as 0 cyan, 0 yellow, 0 magenta, and 0 black – effectively, 0 ink for a printer that simply has those four cartridges. This works great when you have white media, as "printing no ink" simply leaves the white exposed, but as you can imagine, this doesn't work for non-white media. If you don't have a base color to subtract from (i.e., black), then it doesn't matter what you subtract from it, you still have the color black.

    [But], as others are pointing out, there are special printers which can operate in the CMYW color space, or otherwise have a white ink or toner. These can be used to print light colors on top of dark or otherwise non-white media. You might also find my answer to a different question about color spaces helpful or informative.

    Given that the majority of printer media in the world is white and printing pure white on non-white colors is a specialty process, it's no surprise that home and (most) commercial printers alike have no provision for it.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
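
    A small sketch of the subtractive arithmetic behind that answer (a simplified, non-color-managed conversion, written in C# purely for illustration):

        using System;

        // Simplified CMYK -> RGB arithmetic, just to show why "white" is zero
        // ink: with C = M = Y = K = 0 the paper colour passes straight through.
        class CmykDemo
        {
            static string ToRgb(double c, double m, double y, double k)
            {
                int r = (int)Math.Round(255 * (1 - c) * (1 - k));
                int g = (int)Math.Round(255 * (1 - m) * (1 - k));
                int b = (int)Math.Round(255 * (1 - y) * (1 - k));
                return string.Format("({0}, {1}, {2})", r, g, b);
            }

            static void Main()
            {
                Console.WriteLine(ToRgb(0, 0, 0, 0)); // no ink on white paper -> (255, 255, 255)
                Console.WriteLine(ToRgb(1, 0, 1, 0)); // cyan + yellow -> (0, 255, 0), green
            }
        }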

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: A Puzzle – Swap Value of Column Without Case Statement; there were more than 50 solutions proposed in the comments, many of them creative. I have mentioned my personal favorites (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the solutions, by SIJIN KUMAR V P, who used the SOUNDEX function. The request was to explain how SOUNDEX and DIFFERENCE work. Well, there is pretty decent documentation provided for the SOUNDEX function and DIFFERENCE over on MSDN, and if I attempted to explain these functions I would end up writing the same details which are available there. Instead of writing theory, we will try to learn these functions by using a couple of simple puzzles. Try to solve the puzzles using MSDN and see if you can learn something very quickly.

    In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. The DIFFERENCE function returns an integer value: the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4: 0 indicates weak or no similarity, and 4 indicates strong similarity or the same values.

    Learning Puzzle 1: Now let us run the following four queries and observe their output.

        SELECT SOUNDEX('SQLAuthority') SdxValue
        SELECT SOUNDEX('SLTR') SdxValue
        SELECT SOUNDEX('SaLaTaRa') SdxValue
        SELECT SOUNDEX('SaLaTaRaM') SdxValue

    When you look at the result set, all four values are the same. The reason is that, to SQL Server's SOUNDEX function, all four strings are similar-sounding strings.

    Learning Puzzle 2: Now let us run the following five queries and observe their output.

        SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority'))
        SELECT DIFFERENCE ('SLTR','SQLAuthority')

    When you look at the result set you will get results in the range from 1 to 4. Remember how this works: a result of 0 means the values are not at all relevant to each other, and the closer the result gets to 4, the more relevant (similar) the values are.

    Have you ever used the above two functions for a business need or on a production server? If yes, would you please leave a comment with your use cases? I believe it will be beneficial to everyone.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero-downtime deployments so I can deploy less during off hours and more during "slower" hours – or anytime, in theory.

    My current setup, somewhat simplified:

    - Web Server A (.NET app)
    - Web Server B (.NET app)
    - Database Server (SQL Server)

    My current deployment process:

    - "Stop" the sites on both Web Server A and B
    - Upgrade the database schema for the version of the app being deployed
    - Update Web Server A
    - Update Web Server B
    - Bring everything back online

    Current problem: this leads to a small amount of downtime each month – about 30 minutes. I do this during off hours, so it isn't a huge problem, but it is something I'd like to get away from. Also, there is no way to really go back: I don't generally write rollback DB scripts, only upgrade scripts.

    Leveraging the load balancer: I'd love to be able to upgrade one web server at a time – take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software needs to execute against a different version of the database, so I am sort of stuck.

    Possible solution: a current solution I am considering is adopting the following rules:

    - Never delete a database table.
    - Never delete a database column.
    - Never rename a database column.
    - Never reorder a column.
    - Every stored procedure must be versioned. Meaning, 'spFindAllThings' becomes 'spFindAllThings_2' when it is edited, then 'spFindAllThings_3' when edited again. The same rule applies to views.

    While this seems a bit extreme, I think it solves the problem: each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures, and this keeps that 'contract' valid (see the sketch below). The problem is, it just seems sloppy. I know I can clean up old stored procedures after the app has been deployed for a while, but it still feels dirty. Also, it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake.

    Finally, my question: is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
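
    To make the versioning rule concrete, here is a small C# sketch (hypothetical names, not the poster's code) of how each application build would pin itself to one procedure version, leaving the old version in place for the server that hasn't been upgraded yet:

        using System.Data;
        using System.Data.SqlClient;

        // Each release targets an explicit procedure version, so
        // spFindAllThings_2 and spFindAllThings_3 can coexist while the two
        // web servers run different releases of the application.
        class ThingRepository
        {
            private const string FindAllThingsProc = "spFindAllThings_2"; // bumped per release

            public DataTable FindAllThings(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(FindAllThingsProc, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table); // Fill opens/closes the connection
                    return table;
                }
            }
        }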

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in JavaScript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation: basically I pass parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped-down UML diagram showing the outline of my program: in essence, a Part has many Measures, which has many Notes, which has many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional.

    There are a few problems with my design because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and what order they are supposed to be created in, for both the VexFlowStaff and VexFlowNotes.

    I'm not looking for a specific answer, as you'd need a much deeper understanding of my code – just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation? But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down into each PartView, call render for each one, and cascade down into each MeasureView, etc. (sketched below)? Again, I just have no idea what direction to go in; the more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
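
    One way to picture that cascading-render idea (sketched in C# for brevity, though the editor itself is in JavaScript; the class names are the poster's own hypothetical ones):

        using System.Collections.Generic;

        // A parallel view hierarchy: each model class gets a *View counterpart,
        // and Render() cascades from the score down to individual measures.
        class ScoreView
        {
            private readonly List<PartView> _parts = new List<PartView>();
            public void Render() { foreach (var p in _parts) p.Render(); }
        }

        class PartView
        {
            private readonly List<MeasureView> _measures = new List<MeasureView>();
            public void Render() { foreach (var m in _measures) m.Render(); }
        }

        class MeasureView
        {
            // Holds the VexFlow staff/note objects for one measure, so the
            // Measure model class no longer owns any view logic.
            public void Render() { /* build and draw VexFlow objects here */ }
        }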

    Read the article

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation.

    Every year I make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expect differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, SaaS to core cost savings and clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business.

    Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge or needs to evolve to help our customers be successful.

    In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customer's problems first and foremost, and then how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise.

    Talking to customers and industry people after the session, the idea of Alamere seemed to excite people, and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors.

    More about Alamere, and the Adaptive Insurance Enterprise, next time. Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/

    Once you have a tool like this one, you can start to make sure that your BI solution (DWH and cube) is not only structurally sound (I mean, the cube or the report gets processed correctly), but also that the logical integrity of your business rules is enforced. For example, let's say the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that in theory this is true, but a lot of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices, and on the ERP used implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing of a DWH or a cube can be a solution (see the sketch below for the general shape of such a test).

    Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time.

    I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing with you all about this, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))
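
    As an illustration of the idea (this is hand-rolled NUnit, not the QueryUnit API; the table and connection names are invented), the dismissed product line rule above could be expressed as a plain test:

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class DwhBusinessRuleTests
        {
            // Placeholder connection string for the data warehouse.
            private const string Dwh = "Server=.;Database=DWH;Integrated Security=true";

            [Test]
            public void Dismissed_product_line_has_no_2010_invoices()
            {
                using (var conn = new SqlConnection(Dwh))
                using (var cmd = new SqlCommand(
                    @"SELECT COUNT(*)
                      FROM FactInvoice f
                      JOIN DimProduct p ON p.ProductKey = f.ProductKey
                      WHERE p.ProductLine = 'Dismissed' AND YEAR(f.InvoiceDate) = 2010", conn))
                {
                    conn.Open();
                    // The business rule says this count must be zero.
                    Assert.AreEqual(0, (int)cmd.ExecuteScalar(),
                        "Found invoices for a product line the business says was never sold in 2010.");
                }
            }
        }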

    Read the article

  • SQL Contests – Solution – Identify the Database Celebrity

    - by Pinal Dave
    Last week we ran the contest Identify the Database Celebrity and received a fantastic response. Thank you to the kind folks at NuoDB, who offered two USD 100 Amazon gift cards to the winners of the contest. We also had an additional contest in which users had to download and install NuoDB and identify the sample database. You can read about the contest over here. Here are the answers to the questions we asked earlier in the contest.

    Part 1: Identify the Database Celebrity

    Personality 1 – Edgar Frank "Ted" Codd (August 19, 1923 – April 18, 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement. (Wiki)

    Personality 2 – James Nicholas "Jim" Gray (born January 12, 1944; lost at sea January 28, 2007; declared deceased May 16, 2012) was an American computer scientist who received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation." (Wiki)

    Personality 3 – Jim Starkey (born January 6, 1949 in Illinois) is a database architect responsible for developing InterBase, the first relational database to support multi-versioning, the blob column type, type event alerts, arrays and triggers. Starkey is the founder of several companies, including the web application development and database tool company Netfrastructure, and NuoDB. (Wiki)

    Part 2: Identify the NuoDB Sample Database Names

    In this part of the contest one had to download NuoDB and install the sample database Hockey, which contains a few tables. Users had to install the sample database and report the names. Here is the valid answer:

    - HOCKEY
    - PLAYERS
    - SCORING
    - TEAM

    Winners: within the next two days we will contact the winners by email. Winners will have to confirm their email address, and the NuoDB team will send them their Amazon cards directly. Once again, it was indeed fun to run this contest. I have received great feedback about it, and lots of people want me to run a similar contest in the future, which I promise to do soon.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Economic modelling - Resources for valuing goods

    - by Rushyo
    tl;dr: What economic/computer science books would you suggest for learning about the economic valuation of goods and simulations thereof?

    I'm looking to create an economic model for a game based on procedurally created goods. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced by Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are in turn the product of Foo and Bar.

    The problem is not creating the resources and their various production utilities, but getting the game's AI empires and merchants to correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it's worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much of Fakesium, which is needed for Weapon A, I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value – and also increase demand for Dilithium and Widgets too." (A toy version of this loop is sketched below.)

    This is very naive, because it may be much, much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another thing we'd need to consider is that two resources might allow the creation of Weapon A (Fakesium and Lieron). I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised, and it would then only trigger rarely, to compensate for major changes (e.g. if the player blows up the world's only Foo mine!).

    Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for game developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?
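
    A toy C# sketch of that naive feedback loop (all names and constants invented for illustration; it ignores substitution such as Fakesium vs. Lieron and production-chain costs):

        using System;
        using System.Collections.Generic;

        // Each tick, every good's value is nudged up when demand outstrips
        // supply and down otherwise - the "devalue it / increase its value"
        // rule from the question, run until prices largely normalise.
        class Market
        {
            private readonly Dictionary<string, double> _prices = new Dictionary<string, double>
            {
                { "Fakesium", 10.0 }, { "Dilithium", 4.0 }, { "Widgets", 2.0 }
            };

            public void Tick(Func<string, double> demand, Func<string, double> supply)
            {
                foreach (var good in new List<string>(_prices.Keys))
                {
                    double pressure = demand(good) / Math.Max(supply(good), 1e-6);
                    // Move the price a small step toward the demand/supply ratio.
                    _prices[good] *= 1.0 + 0.1 * (pressure - 1.0);
                }
            }

            public double PriceOf(string good) { return _prices[good]; }
        }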

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
    tl;dr – Forget any form of dynamic code emitting in Metro-style. It's not going to happen.

    Over the past week or so I've been trying to get Moq (the popular open source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released it was actually working on Consumer Preview (CP). However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection. This kills off any project based on the popular Castle Project Dynamic Proxy component, of which Moq is one example. You cannot, at this point in time, perform any form of mocking using dynamic injection in your Metro-style unit testing endeavours. So let me take you through my journey on this, so that others don't have to...

    The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and that takes the name of an assembly, which has to be placed within the deployment folder of your app. You cannot give it a filename or a stream. The other methods are there, but private; try to invoke them using reflection and you'll be met with a caspol exception. You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create. You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any end-point on the local machine. (It just times out.) This militates against an approach where your Metro-style app can signal a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere.

    The reason why Microsoft has done this is obvious: taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. There are plenty of reasons why enterprise apps might want to do that last point particularly. At CP, the assumption was that their inspection tools would prevent SRE being used as a malware vector; it now seems they are less confident about that. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.)

    So, two things:

    - System.Reflection.Emit is gone in Metro-style/WinRT. Get over it – dynamic, on-the-fly code generation is not going to happen.
    - I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt

    Read the article

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and community collaboration to solve some SQL Server string-related headaches.

    Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety: a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort.

    A good recent example of this can be found on our own blogs. Despite being a loud advocate of the lightning-fast T-SQL-based string-splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold: the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never".

    Both are great examples of the power of community learning, and the latter in particular of the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.

    Read the article

  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post, but it nearly covers all my experiments and the changes I made to my Network Manager. I hope the information is complete; if there are still questions, more information can be provided.

    I have a ZTE AC2736 USB modem (a CDMA modem) which worked fine in Ubuntu 10.04/11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried to run the drivers provided by Reliance, but they are old and don't support kernels above 2.6.30. Since the code was not downloaded with the ISO image (of 12.04), I couldn't compile the files provided with such drivers.

    lsusb does detect it as a modem, with output similar to 19d2:fff1 ZTE CDMA Technologies Inc. (or similar, as I didn't note it down). If it is detected as USB storage it shows 19d2:fff5 (as per a few online forums; I may be wrong here).

    I used Network Manager and configured the modem to dial #777 (default) with the ISP-provided username/password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once, I was able to connect in the morning hours and the message 'registered on CDMA home network' was flashed. I was able to run an update, and the kernel was updated to 3.0.2-pae (or something similar; I can find out if required). I surfed the net for about 2 hours before restarting. After the restart, the modem again could not get me online. I kept trying many times and tried changing the settings in NM. One evening after dark I was able to connect to the network, with a similar 'registered on CDMA home network' message, and was able to surf the net for nearly 3 hours before I switched off my laptop. I have not been able to get online after that day; it's been 3 days now. I'll try the observed theory of late/early hours sometime soon, as mentioned below.

    Laptop configuration:

    - Make: Lenovo
    - Model: B480
    - Processor: Intel B950
    - RAM: 4G DDR3
    - HDD: 500G
    - Broadcom Wireless/Bluetooth/Ethernet LAN
    - OS: FreeDOS / Ubuntu 12.04 LTS (dual boot)
    - Kernel: 3.0.2-pae

    Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other (wireless) network users), like early mornings or late nights. This is strange, as the connection should not be dependent on bandwidth usage.

    Any help to fix this issue would be appreciated, before I decide to change the OS or ISP.

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments; that is, you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze code fragments from one statement to another statement, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html

    However, right-clicking a Java file and then going all the way down a longish list of menu items to find "Profiling", and then "Insert Profiling Point", is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and then setting a profiling point in exactly the same way as a breakpoint. That's much easier and more intuitive, and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point.

    To achieve this, I added the following to the "layer.xml" file:

        <folder name="Editors">
            <folder name="AnnotationTypes">
                <file name="profiler.xml" url="profiler.xml"/>
                <folder name="ProfilerActions">
                    <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/>
                        <attr name="position" intvalue="300"/>
                    </file>
                </folder>
            </folder>
        </folder>

    Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is its content:

        <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'>
        <type name='editor-profiler'
              description_key='HINT_PROFILER'
              localizing_bundle='org.netbeans.eppi.Bundle'
              visible='true'
              type='line'
              actions='ProfilerActions'
              severity='ok'
              browseable='false'/>

    The only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too, i.e., there's no MIME-type-specific glyphgutter; instead, it is shared by all MIME types. It is a little bit confusing that the profiling point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files, i.e., the developers wanted to avoid confusion and make it available to Java files only. However, I think that, since I'm already aware that I can't set the Java debugger in an HTML file, I'm also aware that the Java profiler can't be set that way as well.

    If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • Why wearing Jeans is considered unprofessional?

    - by Gopinath
    When I started my career 9 years ago I used to wear casual wear to the office – jeans and T-shirts, all 5 days. The environment at the workplace during those days encouraged me to be casual, and many of my colleagues used to come in jeans. We had just started our careers, and in those days it was perfectly fine to be in casuals. As I moved up the ladder, I started feeling the discomfort of wearing jeans at work. During client visits, senior managers' meetings and consultations I was the odd man in the crowd, as the rest of them were in formals. In order to be one among the professionals I was forced to change my dressing style and start wearing formals. But the question "Why is wearing jeans to the workplace considered unprofessional?" lingered in my mind till today.

    I got the answer to my question from a discussion thread on Quora:

    When they were invented, jeans were associated with blue-collar work. They were meant to get muddy and gross and take lots of abuse without falling apart, even if you wore the same pair every day. The people who bought them were the ones whose lives required durable clothing.

    And another commenter says:

    A professional image is critical to cementing business relationships, and part of that is, for right or wrong, how you dress. Jeans are typically associated with "kicking back", relaxation, leisure, informality, and even a slightly rebellious flavor. The style and condition of the jeans are a consideration, as we often wear jeans into advanced states of being worn down, with tearing, etc., which we generally do not do with other clothing items.

    I agree with this theory even though it may be centuries old. If you want to look like a professional and be treated like a professional, it's better to dress up in formals. These days I make a point to be in formals at the workplace. Not everyone is Steve Jobs, who could wear jeans and a turtleneck T-shirt, right?

    CC image credit: flickr/exey

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services to, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value.

    According to InformationWeek.com, Google has a distinctive technology advantage over its competitors like Microsoft, eBay, Amazon and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up.

    In addition to Google's technological advantage, they have also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility for the technology department to be shared among multiple senior-level engineers and removes the need for a singular department head to oversee the activities of the department. This is a unique approach compared to the standard management style. Typically a department head like a CIO or CTO would oversee the department's global initiatives and business functionality. This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators.

    It goes without saying that an IT professional's responsibilities would be directed by Google's technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments are directly impacted by the technology department. In fact, they have to cater to the technology department, since it is a huge driving force in the success of Google.

    Reference:
    http://www.smbtn.com/smallbusinessdictionary/#b
    http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • Action delegate in C#

    - by Jalpesh P. Vadgama
    In the last few posts I have written a lot about delegates, and this post is also part of that series. In this post we are going to learn about Action delegates in C#. Following is a list of posts related to delegates:

    - Delegates in C#.
    - Multicast Delegates in C#.
    - Func Delegates in C#.

    Action Delegates in C#: as per MSDN, Action delegates are used to pass a method as a parameter without explicitly declaring a custom delegate. Action delegates are used to encapsulate a method that does not have a return value. The C# 4.0 Action delegate has the following variants; it can take up to 16 parameters:

    - Action – takes no parameters and does not return any value.
    - Action(T)
    - Action(T1,T2)
    - Action(T1,T2,T3)
    - Action(T1,T2,T3,T4)
    - Action(T1,T2,T3,T4,T5)
    - Action(T1,T2,T3,T4,T5,T6)
    - Action(T1,T2,T3,T4,T5,T6,T7)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15)
    - Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16)

    Sounds interesting!!… Enough theory now; it's time to implement real code. Following is the code for that:

        using System;
        using System.Collections.Generic;

        namespace DelegateExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Action<String> Print = p => Console.WriteLine(p);
                    Action<String, String> PrintAnother =
                        (p1, p2) => Console.WriteLine(string.Format("{0} {1}", p1, p2));

                    Print("Hello");
                    PrintAnother("Hello", "World");
                }
            }
        }

    In the above code you can see that I have created two Action delegates, Print and PrintAnother. Print has one string parameter and prints it, while PrintAnother has two string parameters and prints both strings via Console.WriteLine. Now it's time to run the example, and following is the output, as expected.

    That's it. Hope you liked it. Stay tuned for more updates!!
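
    As a small extension of the example above (my addition, not from the original post), an Action can also point at a named method instead of a lambda, which is handy when the method body is longer:

        using System;

        class MethodGroupExample
        {
            static void SayHello() { Console.WriteLine("Hello from a named method"); }
            static void PrintSum(int a, int b, int c) { Console.WriteLine(a + b + c); }

            static void Main()
            {
                Action greet = SayHello;              // parameterless variant
                Action<int, int, int> sum = PrintSum; // three-parameter variant
                greet();
                sum(1, 2, 3); // prints 6
            }
        }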

    Read the article

  • Rewriting code under BSD license

    - by Frank
    I am currently studying OpenGL with the OpenGL SuperBible, 5th edition. I've found some C++ code distributed with the book that is interesting to me (see also on Google Code). That code is under the New BSD License. I am writing my software in C# with the SharpGL wrapper, and I'd like to know the following things:

    Can I rewrite that C++ code in C#? (edit: I'm interested in using things like GLBatch, GLShaderManager and some other things from GLTools. The problem is that the library is in C++, but I use C#.)

    How do I have to mark my source code if I put it somewhere like my GitHub account? What should the disclaimer look like? The original disclaimer looks like:

        /*
        GLShaderManager.h

        Copyright (c) 2009, Richard S. Wright Jr.
        All rights reserved.

        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:

        Redistributions of source code must retain the above copyright notice,
        this list of conditions and the following disclaimer.

        Redistributions in binary form must reproduce the above copyright notice,
        this list of conditions and the following disclaimer in the documentation
        and/or other materials provided with the distribution.

        Neither the name of Richard S. Wright Jr. nor the names of other
        contributors may be used to endorse or promote products derived from this
        software without specific prior written permission.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
        AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
        IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
        ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
        LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
        CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
        SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
        INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
        CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
        ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
        POSSIBILITY OF SUCH DAMAGE.
        */

    Edit: should my copyright look something like this after rewriting?

        Copyright (c) 2014, My Name
        Copyright (c) 2009, Richard S. Wright Jr.
        All rights reserved.

        Redistribution...................

    Read the article

  • Finding work as a college student

    - by lightburst
    I'm a 3rd year CS student. I'm currently really, really bored and tired of cheap school programming (I go to a fairly respectable (albeit not top) university in my country, but, really, it's not MIT), so I've been thinking about getting a part-time dev job for a long while now. I'm not some hotshot programmer or anything, but "Add/delete XYZ class objects to list" or "Do this web feature/pattern in PHP" does get old after a few semesters. I also only learned (heard?) of programming when I entered college, so the duration of my contact with any code is short.

    I can't really apply as an intern, as I have not yet accumulated the necessary credits to do that, so I was thinking of selling myself as a part-time dev. I still need to go to school, and don't want to subject myself to living two lives. Plus, I think I'd have better chances, considering my lack of things to write on the resume. The only language I know at heart is C (I've written several pointer-oriented things in my freshman year, which is apparently pretty leet (for some reason), or so Joel says; that sort of boosted my morale a bit), but I've worked with several other languages only for the sake of coursework, such as C#/Java/PHP/ASM. My only user-worthy project was a recursive quicksort simulator I wrote for class using GTK+ that used a textbox to output the progress. I also have not taken the compiler theory class yet, or done my thesis.

    All that being said, I wonder if any legitimate software company (big or small) would hire somebody like me, considering all that. If there are companies that do accept somebody like me, would I be doing programming, or maybe just be a tester or something? Would anybody hire me as a dev at all? I know I don't have much (not even a degree), but what I lack in experience, I compensate for with interest? I am less interested in websites and 'management' software (no offense or anything; also, most of the places here ONLY have those), and more into 'process-driven' (I'm not sure what to call it) and system software. I have my eyes on a startup that sells basically an extension of Google Drive, but I feel like I'm too 'risky' for them.

    Oh, I'm also 19, if that means anything. We're not K-12, so kids go to college earlier than in the US.

    Read the article

  • Adjusting the Score on Oracle Text search results

    - by Kyle Hatlestad
    When you sort the results of a search by Score using OracleTextSearch as the search engine in WebCenter Content, the results coming back are ranked by the relevancy of the search term to the document. In theory, the more relevant the search term is to the document, the higher-ranked Score it should receive. But in practice, the relevancy score can seem somewhat of a mystery: it's not entirely clear how it ranks the importance of some documents over others based on the search term. And often, once a word appears a certain number of times within a document, the Score simply maxes out at 100, and the top results can be difficult to discern from one another. Take for example the search for 'vacation' on this set of documents: out of 7 results, 6 of them have a Score of '100', which means they are basically ranked the same. This doesn't make the sort by Score very meaningful.

    Besides sorting by relevance, you can also tell Oracle Text to sort by occurrence. In that case, it is much more predictable how the results will be ranked, and for many cases this provides a more meaningful sorting of results than relevance. Making this change takes a small component change to the SearchOperatorMap resource. By default, the query used for full-text searching looks like:

        <td>(ORACLETEXTSEARCH)fullText</td>
        <td>DEFINESCORE((%V), RELEVANCE * .1)</td>
        <td>text</td>

    Overriding this resource and changing it to:

        <td>(ORACLETEXTSEARCH)fullText</td>
        <td>DEFINESCORE((%V), OCCURRENCE * .01)</td>
        <td>text</td>

    will force it to use occurrence (note the change in scale to .01 as well). So, running the same search and sort options as the example above, the results come out quite a bit differently: in this case, there is a clear understanding of how the items rank. And generally, if a search term appears 3 times more often in one document than in another, that document has a better chance of being one I'm interested in. You may or may not feel the occurrence ranking is better than the relevance ranking, but this provides the opportunity to try an alternate method that might work better for your results. A pre-built component is available for download here.

    There is one caveat in using this method: the occurrence ranking also maxes out at 100, so if a search term appears in the document more times than that, the Score will stay at 100.

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets – currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating: the effect of money supply on exchange rates and interest rates, and the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again, and so on – making that $10 of money supply worth a lot more: $10 + $8 + $6.40 + ..., a geometric series that sums to $10/(1 - 0.8) = $50 at that 80% loan rate). But all this depends on money supply – in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes, it is saved too, but that is so one can purchase goods and services later.

    Completely unrelated, there is a sea change occurring in the web world, dominated, I believe, by Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services – with the money multiplier kicking in. I remember talking with a colleague about this a few years ago: the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass, for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely.

    The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year old middle-class user as their user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green – no pun intended!

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java and titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient. None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable." Horstmann observes that developers can do fine with Scala without grasping the theory behind it; he argues that most of us learn best through examples, not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++. When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seem under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by reading the full interview here.
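    As a hedged illustration of the "half the lines" point (my own example, not one of Horstmann's), here is a tedious Java-style loop next to the equivalent Scala collection pipeline:

        object ConcisenessDemo extends App {
          val prices = List(19.99, 5.49, 102.0, 3.25)

          // Java-style: a mutable accumulator plus an explicit loop
          var discountedLoop = List.empty[Double]
          for (p <- prices) {
            if (p > 5.0) discountedLoop = discountedLoop :+ (p * 0.9)
          }

          // Idiomatic Scala: the same logic as a single collection pipeline
          val discounted = prices.filter(_ > 5.0).map(_ * 0.9)

          println(discounted)                     // approximately List(17.991, 4.941, 91.8)
          println(discountedLoop == discounted)   // true: both compute the same values
        }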

    Read the article

  • Token-based Authentication and Claims for RESTful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP-based services (active/WS-Trust). While there is limited support for WCF WebServiceHost-based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that authenticate based on tokens. This is not an oversight by the WIF team; the security world for REST services is rapidly changing, and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already-deprecated ones, so it didn't make sense to bake all of that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just as the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how. HTTP Services and Authentication: Unlike SOAP services, the REST world has no (over-)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and the data in transit. In most cases the HTTP Authorization header is used to transmit the security token (which can be anything from a simple username/password up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme; the scheme is a string that gives the service a hint about what type of credential was used (e.g. Basic for basic-authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this, the WWW-Authenticate response header is used. So for token-based authentication, the service simply needs to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token, regardless of which technology the service was actually built with. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter; in (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free. This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment: (1) a representation of identity based on claims, which is a very natural way of translating a security token (and again, I mean this in the widest sense; it could also be a username/password) into something our applications can work with; (2) infrastructure to convert tokens into claims (called security token handlers); (3) claims transformation; and (4) claims-based authorization. So much for the theory. In the next post I will show you how to implement this for WCF, including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
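    In the meantime, here is a minimal, framework-neutral sketch of the flow described above (written in Scala rather than .NET; none of these names are the actual WIF API): split the Authorization header into scheme and credential, validate the token, and surface the result as claims.

        // Illustrative stand-ins; real WIF works with ClaimsPrincipal and security token handlers.
        final case class Claim(claimType: String, value: String)

        object TokenAuthSketch extends App {
          // Authorization header format: "<scheme> <credential>", e.g. "Bearer eyJhbGc..."
          def parseAuthorization(header: String): Option[(String, String)] =
            header.split(" ", 2) match {
              case Array(scheme, credential) if credential.nonEmpty => Some((scheme, credential))
              case _                                                => None
            }

          // Toy validation: a real service would check signature, issuer, audience, and expiry.
          def validate(scheme: String, token: String): Option[Seq[Claim]] =
            if (scheme == "Bearer" && token.startsWith("valid-"))
              Some(Seq(Claim("sub", token.stripPrefix("valid-")), Claim("scope", "read")))
            else None

          val claims = parseAuthorization("Bearer valid-alice")
            .flatMap { case (scheme, token) => validate(scheme, token) }

          // On failure, the service would respond 401 and advertise the expected
          // scheme back to the client via the WWW-Authenticate response header.
          println(claims.getOrElse("401 Unauthorized, WWW-Authenticate: Bearer"))
        }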

    Read the article

  • What are some good realistic programming-related movies (docu-dramas, documentaries, accurate fiction, etc.)?

    - by EpsilonVector
    A while ago I asked this question and the result was this. Following the response I got in the meta question, I'm re-asking the question with new guidelines to focus it on the direction I wanted it to have originally. ================================================================== The guidelines are as follows: by "programming related" I mean movies from which we can learn about things like the development process, the history of software and computers, or programming culture. In other words, they must be grounded in the industry; no tangential stuff. Good entries meet as many of the following criteria as possible: they teach you about the history of the industry, the development process, or important industry-related topics (software patents, for example); they are based on real-life events, companies, people, and practices, and those are the main focus of the movie; after watching them, you feel like you understand or know something about the programmers' world that you didn't before (or you can see how someone could have that response); and you can point to one and say "this faithfully represents the industry/programmer culture at some point in time", the kind of movie you would show laymen to explain what "your people" are like and what it is that you do. Examples of good entries include Pirates of Silicon Valley (the story of how Microsoft and Apple started the industry), Revolution OS (the story of Linux's rise to fame, and a pretty good survey of the Free Software/Open Source world), and Aardvark'd: 12 Weeks with Geeks (the development process). Examples of bad entries: movies whose sole relevance is that they can be appreciated by programmers; the point of this question is not "what are some good movies" with "for a programmer" appended to it, and a few well-written computer jokes don't in themselves make a movie about the industry. Also excluded are movies with a computer-related element that are not about the industry itself; 24 (the TV series), for example, is a product of the information age but isn't actually about it, and a really cool programmer character doesn't redeem a movie that is overall about something completely different, just as The Big Bang Theory is not about physics even though it has a cool physicist as a character. Science fiction is out too, even when it draws ideas from computers, the Matrix trilogy being the obvious example. If you can't point to a movie and say "this is a faithful representation of our world at some point in time", then it doesn't mirror the industry. Keep it to one entry per answer so that voting can sort the entries out.

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >