Search Results

Search found 25071 results on 1003 pages for 'information overload'.


  • New Information Center - Reviewing Security For FMW 11g

    - by Daniel Mortimer
    Announcing: the Information Center "Reviewing Security For Oracle Fusion Middleware 11g" [ID 1458051.2] has been published.

    What is an Information Center? Information Centers use widgets to aggregate knowledge content, such as support documents, product documentation and support community threads, pertinent to a given task or intent. Widgets either contain static lists or, better still, are dynamic: a dynamic widget uses query criteria to present a list of support documents relevant to the title / subject matter of the widget, and its content is refreshed automatically every 24 hours. Once you are in an Information Center, you can use the left-hand menu to navigate to the other Task / Intent Information Centers (e.g. "Install and Configure", "Patch", "Troubleshoot", "Upgrade") which are available for the chosen product.

    Are Information Centers easy to find? You can go straight to the new "Reviewing Security" Information Center by using the hyperlink given above. There are, however, two other methods which make Information Centers easier to find: Browse Knowledge and Refine Your Search.

    Browse Knowledge is currently found in the "Knowledge" tab page in My Oracle Support. As the screenshots in the original article illustrate, you can find Information Centers by choosing a product (e.g. "Oracle Fusion Middleware"), a version and an action / intent. If an Information Center exists for your selection, the "Advisor Found" button is enabled; clicking on this button will take you straight to the desired Information Center.

    Refine Your Search is a dialogue which is triggered by certain keywords you may enter into the Global Search field in the top right-hand corner of My Oracle Support. It works in a similar manner to "Browse Knowledge": choose your product and version (the appropriate Task / Intent should already be selected for you), then click the Go button.

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information', as search engines do, isn't new; just think of the once popular 'Who's Who'. What really changed is the price-value ratio: the curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing.). In addition, the different channels of digital information distribution significantly change the value of information; I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to earn a premium for the 'full' content. Offering no freemium at all seems to take them out of business, because they are then apparently no longer visible in today's most relevant channels of information consumption. But the more freemium is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which, to me, somehow does not match the spirit of the WWW and Generation Y's perception of information consumption and exchange.

    Read the article

  • Open-source package for securely allowing users to log in and provide information

    - by JTS
    I have a site written mostly in PHP and HTML. I also have a SQL database of personal information such as names and addresses. I would like my users to be able to log in to my website with credentials I can email or snail-mail to them, and then view and edit their information in my database. Users can currently enter information online and I store it in my database, but they can't view or edit stored information. I can add the code to do this, but once I give users the ability to view information I suddenly have a lot more security concerns. Is there an open-source package that deals with allowing users to do something like this? Or is there an established convention for it? I know this is a pretty basic question, and there might be some good literature about it that I have yet to find, so if someone can just point me in the direction of some of that information, or better yet give me firsthand advice, that would be great.
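    For context, here is a minimal sketch of the session-based pattern most such packages implement (the table and column names are invented examples, and real code would also need CSRF protection and TLS):

        <?php
        // login.php - verify a password hash created earlier with password_hash()
        session_start();
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');

        $stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE email = ?');
        $stmt->execute(array($_POST['email']));
        $row = $stmt->fetch();

        if ($row && password_verify($_POST['password'], $row['password_hash'])) {
            $_SESSION['user_id'] = $row['id']; // later pages only show/edit this id's rows
        } else {
            http_response_code(401); // bad credentials
        }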

    Read the article

  • How to encrypt information in an ASPX page?

    - by Jimmyc
    Hi all, I know it's a silly question, but my client asked for encrypting some information in their payment system to prevent users from stealing personal information. The system is web-based and written in ASP.NET. We have tried some annoying solutions such as JavaScript no-right-click and CSS no-print, but apparently my client didn't like them. So: are there any commercial solutions to encrypt information in the HTML produced by ASPX pages? Or can someone tell me how to persuade my client to drop this "prevent stealing" idea in a web-based system?

    Read the article

  • Is There a Central Repository of JavaScript Information?

    - by Brian
    For example, if you want information on PHP functions, you can go to http://www.php.net/. If you want information on Perl functions, you can go to http://www.cpan.org/ and/or use perldoc. If you want information on Java, you can go to http://java.sun.com and/or use javadoc. However, if you want information on JavaScript methods/functions, their attributes, return values, etc., where do you go? The reason I ask is that I was playing with the focus() method and wondering if it could be passed any values, or if it returned anything when called. I have done a cursory Google search but haven't found much. Does such a beast exist, or am I out of luck? Thanks for reading, and have a good day.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. One of the key capabilities, among many, that differentiates our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach is very different from the classic simple keyword search found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore.

    The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key search features include type-ahead suggestions, automatic alphanumeric spell correction, positional search, Booleans, wildcarding, natural language, and category search and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery. Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple "search boxes" in the following ways:

    The Endeca Server supports exploratory search. Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn't assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary "black box" formula. But how can a business user, searching, act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular e-commerce sites, where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery; in fact, the Endeca Server powers many of the most popular e-commerce sites in the world.

    The Endeca Server supports cascading relevance. Traditional approaches to search reduce many relevance weights to a single score. This means that if a result with a good title match gets a similar score to one with an exact phrase match, they'll appear next to each other in a list. But a user can't deduce from their scores why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach: the Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get the explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance.

    Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as the faceted search mechanisms that include queries and filters.
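    To make the tie-breaking idea concrete, here is a minimal sketch of stratified ordering in general (the Result type and its per-strategy scores are hypothetical illustrations, not Endeca's implementation):

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical result type with per-strategy scores precomputed.
        public record Result(string Title, int ExactPhraseTier, double TitleScore);

        public static class Ranker
        {
            // The primary strategy forms the strata; later strategies only break ties
            // within a stratum, so a user can always see why a result ranked higher.
            public static IEnumerable<Result> Rank(IEnumerable<Result> results) =>
                results.OrderByDescending(r => r.ExactPhraseTier)
                       .ThenByDescending(r => r.TitleScore);
        }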

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself. I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword: the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped by conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time. Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle all that for you? Cheers, Michael

    Read the article

  • Java SecurityException: signer information does not match

    - by Frank
    I recompiled my classes as usual, and suddenly got the following error message. Why, and how do I fix it?

        java.lang.SecurityException: class "Chinese_English_Dictionary"'s signer information
        does not match signer information of other classes in the same package
            at java.lang.ClassLoader.checkCerts(ClassLoader.java:776)
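    A diagnostic sketch that may help narrow this down (my own illustration; this exception typically means a signed jar and freshly recompiled, unsigned classes ended up in the same package). Class.getSigners() returns null for unsigned classes, so printing the signers of the suspect class and a known-unsigned one usually pinpoints the mismatch:

        import java.util.Arrays;

        public class SignerCheck {
            public static void main(String[] args) throws Exception {
                // Compare the suspect class against a known-unsigned one
                printSigners(Class.forName("Chinese_English_Dictionary"));
                printSigners(SignerCheck.class);
            }

            static void printSigners(Class<?> c) {
                Object[] signers = c.getSigners(); // null when the class is unsigned
                System.out.println(c.getName() + " signers: "
                        + (signers == null ? "none" : Arrays.toString(signers)));
            }
        }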

    Read the article

  • Information Extraction Toolkits

    - by MathGladiator
    I'm looking for information-extraction libraries that can handle semi-structured information which may have hidden or incomplete data. I want to train some classifiers to pull out content based on the structure. I'm working on building a tool where I can select text in the browser, and it will generate (via some web service call) a classifier that can be used on other documents to pull out text. I'm primarily looking at how the structure of a document can be used to indicate what its content is.

    Read the article

  • Media Information Extractor for Java

    - by eyazici
    I need a media-information extraction library (pure Java or a JNI wrapper) that can handle common media formats. I primarily use it for video files, and I need at least this information:

        Video length (runtime)
        Video bitrate
        Video framerate
        Video format and codec
        Video size (width x height)
        Audio channels
        Audio format
        Audio bitrate and sampling rate

    There are several libraries and tools around, but I couldn't find one for Java.

    Read the article

  • Oracle Apps Projects module information

    - by nil
    Hi to all. I am asking about the Oracle Apps Projects module. In the Projects module there is a Project Status Inquiry, and inside that a 'percent complete' functionality. I require information on what this functionality is, and whether it updates automatically or requires running any requests or other processes. Please give me information on what this function does. Thanks, nil

    Read the article

  • Overloading stream insertion without violating information hiding?

    - by Chris
    I'm using yaml-cpp for a project. I want to overload the << and >> operators for some classes, but I'm having an issue grappling with how to "properly" do this. Take the Note class, for example. It's fairly boring:

        class Note {
        public:
            // constructors
            Note( void );
            ~Note( void );

            // public accessor methods
            void number( const unsigned long& number ) { _number = number; }
            unsigned long number( void ) const { return _number; }
            void author( const unsigned long& author ) { _author = author; }
            unsigned long author( void ) const { return _author; }
            void subject( const std::string& subject ) { _subject = subject; }
            std::string subject( void ) const { return _subject; }
            void body( const std::string& body ) { _body = body; }
            std::string body( void ) const { return _body; }

        private:
            unsigned long _number;
            unsigned long _author;
            std::string _subject;
            std::string _body;
        };

    The << operator is easy sauce. In the .h:

        YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v );

    And in the .cpp:

        YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v ) {
            out << v.number() << v.author() << v.subject() << v.body();
            return out;
        }

    No sweat. Then I go to declare the >> operator. In the .h:

        void operator >> ( const YAML::Node& node, Note& note );

    But in the .cpp I get stuck:

        void operator >> ( const YAML::Node& node, Note& note ) {
            node[0] >> ?
            node[1] >> ?
            node[2] >> ?
            node[3] >> ?
            return;
        }

    If I write things like node[0] >> v._number; then I would need to make all of the Note fields public, which defeats everything I was taught (by professors, books, and experience) about data hiding. I feel like doing node[0] >> temp0; v.number( temp0 ); all over the place is not only tedious, error-prone, and ugly, but rather wasteful (what with the extra copies). Then I got wise: I attempted to move these two operators into the Note class itself and declare them as friends, but the compiler (GCC 4.4) didn't like that:

        src/note.h:44: error: 'YAML::Emitter& Note::operator<<(YAML::Emitter&, const Note&)' must take exactly one argument
        src/note.h:45: error: 'void Note::operator>>(const YAML::Node&, Note&)' must take exactly one argument

    Question: How do I "properly" overload the >> operator for a class without violating the information-hiding principle, and without excessive copying?
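    For reference, the usual pattern here is a sketch like the following (adapted from the class above, not from the original post): declare the operators as non-member friends inside the class. The leading friend keyword keeps them free functions with two parameters, which is why the member-operator attempt failed, yet still grants them access to the private fields, so nothing needs to be made public and no temporaries are required:

        class Note {
            // ... constructors and accessors as above ...

            // Non-member friends: full access to privates, correct two-argument form.
            friend YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v );
            friend void operator >> ( const YAML::Node& node, Note& note );
        };

        void operator >> ( const YAML::Node& node, Note& note ) {
            node[0] >> note._number;   // direct member access is legal for a friend
            node[1] >> note._author;
            node[2] >> note._subject;
            node[3] >> note._body;
        }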

    Read the article

  • Renault under threat from industrial espionage, intellectual property the target

    - by Simon Thorpe
    Last year we saw news of both General Motors and Ford losing a significant amount of valuable information to competitors overseas. Within weeks of the turn of 2011, we see the European car manufacturer Renault also suffering. In a recent news report, French Industry Minister Eric Besson warned the country was facing "economic war" and referenced a serious case of espionage concerning information pertaining to the development of electric cars. Renault senior vice president Christian Husson told the AFP news agency that the people concerned were in a "particularly strategic position" in the company. An investigation had uncovered a "body of evidence which shows that the actions of these three colleagues were contrary to the ethics of Renault and knowingly and deliberately placed at risk the company's assets", Mr Husson said. A source told Reuters on Wednesday that the company is worried its flagship electric vehicle program, in which Renault and its partner Nissan are investing 4 billion euros ($5.3 billion), might be threatened. This puts the estimated losses of Ford ($50 million) and General Motors ($40 million) in the shade.

    One executive in the corporate intelligence-gathering industry, who spoke on condition of anonymity, said: "It's really difficult to say it's a case of corporate espionage ... It can be carelessness." He cited a hypothetical example of an enthusiastic employee giving away too much information about his job on an online forum. While information has always been passed and leaked, inadvertently or on purpose, the rise of the Internet and social media means corporate spies or careless employees are now more likely to be found out, he added.

    We are seeing more and more examples of companies like these needing to invest in technologies such as Oracle IRM to ensure such important information can be kept under control. It isn't just the recent release of information into the public domain via the Wikileaks website that is of concern, but also the increasing threat of industrial espionage in cases such as these. Information rights management doesn't remove the threat entirely, but the ability to control documents no matter where they exist certainly increases an organisation's capabilities significantly. Every single time someone opens a sealed document, the IRM system audits the activity. This makes identifying a potential source for a leak much easier when you have an absolute record of every person who has had access to the documents. Oracle IRM can also help with accidental or careless loss: often people work with very sensitive information all the time and forget the importance of handling it correctly. With the ability to protect the information from screenshots and prevent people from copying and pasting document information into social networks and other unsecured documents, Oracle IRM brings a totally new level of information security that would have a significant impact on reducing the risk these organisations face of losing their most valuable information.

    Read the article

  • wrong operator() overload called

    - by user313202
    Okay, I am writing a matrix class and have overloaded the function-call operator twice. The core of the matrix is a 2D double array. I am using the MinGW GCC compiler called from a Windows console. The first overload is meant to return a double from the array (for viewing an element); the second overload is meant to return a reference to a location in the array (for changing the data in that location):

        double operator()(int row, int col) const; // allows viewing an element
        double &operator()(int row, int col);      // allows assignment of an element

    I am writing a testing routine and have discovered that the "viewing" overload never gets called. For some reason the compiler "defaults" to calling the overload that returns a reference when the following printf() statement is used:

        fprintf(outp, "%6.2f\t", testMatD(i,j));

    I understand that I'm insulting the gods by writing my own matrix class without using vectors and testing with C I/O functions. I will be punished thoroughly in the afterlife, no need to do it here. Ultimately I'd like to know what is going on here and how to fix it. I'd prefer to use the cleaner-looking operator overloads rather than member functions. Any ideas? -Cal

    The matrix class (irrelevant code omitted):

        class Matrix {
        public:
            double getElement(int row, int col) const; // returns the element at row,col

            // operator overloads
            double operator()(int row, int col) const; // allows viewing an element
            double &operator()(int row, int col);      // allows assignment of an element

        private:
            // data members (sortedArray, usrZeroRow, usrZeroCol etc. omitted here)
            double **array; // pointer to data array
        };

        double Matrix::getElement(int row, int col) const {
            // transform indices into true coordinates (from sorted coordinates;
            // only row needs to be transformed - the user can only sort by row)
            row = sortedArray[row];
            result = array[usrZeroRow+row][usrZeroCol+col];
            return result;
        }

        // operator overloads
        double Matrix::operator()(int row, int col) const {
            // this overload is used when viewing an element
            return getElement(row,col);
        }

        double &Matrix::operator()(int row, int col) {
            // this overload is used when placing an element
            return array[row+usrZeroRow][col+usrZeroCol];
        }

    The testing program (irrelevant code omitted):

        int main(void) {
            FILE *outp;
            outp = fopen("test_output.txt", "w+");

            Matrix testMatD(5,7); // construct 5x7 matrix
            // some initializations omitted

            fprintf(outp, "%6.2f\t", testMatD(i,j)); // calls the "wrong" overload
        }
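    For what it's worth, a minimal sketch of the mechanics at play (mine, using the class above): the compiler selects between a const and a non-const overload based on the constness of the object, not on how the result is used. Since testMatD is non-const, the reference-returning overload is always chosen; viewing through a const alias selects the other one:

        const Matrix& view = testMatD;         // const alias of the same matrix
        fprintf(outp, "%6.2f\t", view(i, j));  // now the const overload is called

    Note that calling the non-const overload here is harmless for printing: the returned double& is read and its value passed to fprintf. The overload choice only matters if the two implementations behave differently.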

    Read the article

  • Best practices regarding equals: to overload or not to overload?

    - by polygenelubricants
    Consider the following snippet:

        import java.util.*;

        public class EqualsOverload {
            public static void main(String[] args) {
                class Thing {
                    final int x;
                    Thing(int x) { this.x = x; }
                    public int hashCode() { return x; }
                    public boolean equals(Thing other) { return this.x == other.x; }
                }
                List<Thing> myThings = Arrays.asList(new Thing(42));
                System.out.println(myThings.contains(new Thing(42))); // prints "false"
            }
        }

    Note that contains returns false!!! We seem to have lost our things!! The bug, of course, is the fact that we've accidentally overloaded, instead of overridden, Object.equals(Object). If we had written class Thing as follows instead, then contains would return true as expected:

        class Thing {
            final int x;
            Thing(int x) { this.x = x; }
            public int hashCode() { return x; }
            @Override public boolean equals(Object o) {
                return (o instanceof Thing) && (this.x == ((Thing) o).x);
            }
        }

    Effective Java, 2nd Edition, Item 36: Consistently use the Override annotation, uses essentially the same argument to recommend that @Override should be used consistently. This advice is good, of course, for if we had tried to declare @Override equals(Thing other) in the first snippet, our friendly little compiler would immediately point out our silly little mistake, since it's an overload, not an override. What the book doesn't specifically cover, however, is whether overloading equals is a good idea to begin with. Essentially, there are 3 situations:

        Overload only, no override - ALMOST CERTAINLY WRONG! This is essentially the first snippet above.
        Override only (no overload) - one way to fix it. This is essentially the second snippet above.
        Overload and override combo - another way to fix it.

    The 3rd situation is illustrated by the following snippet:

        class Thing {
            final int x;
            Thing(int x) { this.x = x; }
            public int hashCode() { return x; }
            public boolean equals(Thing other) { return this.x == other.x; }
            @Override public boolean equals(Object o) {
                return (o instanceof Thing) && (this.equals((Thing) o));
            }
        }

    Here, even though we now have 2 equals methods, there is still one equality logic, and it's located in the overload. The @Override simply delegates to the overload. So the questions are: What are the pros and cons of "override only" vs the "overload & override combo"? Is there a justification for overloading equals, or is this almost certainly a bad practice?
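    A short sketch (mine, not from the original snippets) of why situation 1 bites: overloads are resolved at compile time from the static type of the reference, and List.contains compares through an Object-typed reference internally, so it can only ever reach equals(Object):

        Thing a = new Thing(42);
        Object b = a;                    // same object, different static type
        Thing c = new Thing(42);

        System.out.println(a.equals(c)); // true:  binds to the equals(Thing) overload
        System.out.println(b.equals(c)); // false: binds to Object.equals, never overridden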

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx

    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

    Determining the caller using the stack

    One way of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain, all the way back to the beginning of the thread that called you. You can get caller information either by instantiating the StackTrace class (which gives you the complete stack trace, much like you see when an exception is generated), or by using StackFrame, which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-critical situation.

    For our simple example, let's say we are going to reinvent the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows:

        static void Log(object message)
        {
            // frame 1 (frame 0 is this method), true to capture source info
            StackFrame frame = new StackFrame(1, true);
            var method = frame.GetMethod();
            var fileName = frame.GetFileName();
            var lineNumber = frame.GetFileLineNumber();

            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, method.Name, message);
        }

    So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify that we want source information (if available), and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this:

        // File c:\projects\test\CallerInfo\CallerInfo.cs

        public class CallerInfo
        {
            Log("Hello Logger!");
        }

    We'd see this:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

    Determining the caller at compile-time

    In C# 5 (.NET 4.5) some attributes were added that can be applied to optional parameters on a method to receive caller information. These attributes can only be applied to optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it fills in the values at compile-time. These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:

        CallerFilePathAttribute – the path and name of the file that is calling your method.
        CallerLineNumberAttribute – the line number in the file where your method is being called.
        CallerMemberName – the member that is calling your method.

    So let's take a look at how our Log method would look using these attributes instead:

        static void Log(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message);
        }

    Again, calling this from our sample Main would give us the same result:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time), giving you the file path, line number, and calling member. Thus, it does not give you as rich a level of detail as a StackFrame (which can give you the calling type as well, and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this:

        // They're defaults, why not fill 'em in
        Log("My message.", "Some member", "Some file", -13);

    In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods.

    These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to the console, I'll return the formatted string length of each. Doing this, we get:

        Time for 1,000,000 iterations with StackTrace: 5096 ms
        Time for 1,000,000 iterations with Attributes: 196 ms

    So you see, using the attributes is much, much faster: nearly 25x faster, in fact.

    Summary

    There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.

    Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName
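    One of the most common uses of CallerMemberName in practice (a usage sketch assuming a typical INotifyPropertyChanged implementation, not code from the post above) is letting the compiler supply property names for change notification:

        using System.ComponentModel;
        using System.Runtime.CompilerServices;

        public class ViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; OnPropertyChanged(); } // compiler fills in "Name"
            }

            private void OnPropertyChanged([CallerMemberName] string propertyName = "")
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }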

    Read the article

  • How does the method overload resolution system decide which method to call when a null value is passed?

    - by Joan Venge
    So for instance you have a type like:

        public class EffectOptions
        {
            public EffectOptions ( params object [ ] options ) {}
            public EffectOptions ( IEnumerable<object> options ) {}
            public EffectOptions ( string name ) {}
            public EffectOptions ( object owner ) {}
            public EffectOptions ( int count ) {}
            public EffectOptions ( Point point ) {}
        }

    Here I just give the example using constructors, but the result would be the same for non-constructor methods on the type itself, right? So when you do:

        EffectOptions options = new EffectOptions (null);

    which constructor would be called, and why? I could test this myself, but I want to understand how the overload resolution system works (not sure if that's what it's called).
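    For reference, a sketch of the general rules (my summary, not from the question): the compiler first discards overloads that can't accept null (the int and Point value-type overloads), then looks for a single most specific candidate among the rest. With string, object, object[] and IEnumerable<object> all applicable, and no implicit conversion ordering string against the array type, the call above is actually ambiguous and fails to compile; a cast disambiguates it:

        var byName  = new EffectOptions((string)null);   // string overload
        var byOwner = new EffectOptions((object)null);   // object overload
        var byArray = new EffectOptions((object[])null); // params overload; the array itself is null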

    Read the article

  • Mutual Information / Entropy Calculation Help

    - by Fillip
    Hi, hoping someone can give me some pointers with this entropy problem. Say X is chosen randomly from the uniform integer distribution 0-32 (inclusive), i.e. 33 equally likely values. I calculate the entropy H(X) = log2(33), roughly 5.04 bits, as each Xi has equal probability of occurring. Now, say the following pseudocode executes:

        int r = rand(0,1); // a random integer, 0 or 1
        r = r * 33 + X;

    How would I work out the mutual information between the two variables r and X? Mutual information is defined as I(X; Y) = H(X) - H(X|Y), but I don't really understand how to apply the conditional entropy H(X|Y) to this problem. Thanks
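    A worked sketch of the calculation (my own, not from the thread): since X can be recovered exactly from r, the conditional entropy vanishes and the mutual information equals H(X):

        r = 33b + X, \quad b \in \{0,1\},\; X \in \{0,\dots,32\}
        X = r \bmod 33 \quad\Rightarrow\quad H(X \mid r) = 0
        I(X; r) = H(X) - H(X \mid r) = \log_2 33 \approx 5.04 \text{ bits}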

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting the important information needed for a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events, and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes, and provide forensic evidence.
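    As a rough illustration of the squirrelling-away step (a sketch of the standard technique with an assumed archive table name, not code from Feodor's article), you locate the default trace file via sys.traces and read it with fn_trace_gettable:

        -- Find the default trace file and copy its rows to a permanent table.
        DECLARE @path nvarchar(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT t.StartTime, t.EventClass, t.LoginName, t.DatabaseName, t.ObjectName
        INTO dbo.DefaultTraceArchive   -- assumed table name
        FROM sys.fn_trace_gettable(@path, DEFAULT) AS t;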

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with, I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system.

    Motivations for Re-designing a Large Information System

    The system is one that's been in place for a number of years and was originally designed to do something significantly different from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality:

        Input – adding information to the system
        Storage – persisting information in an efficient, searchable structure
        Output – delivering the information to the client
        Control – management of the process

    There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as:

        Overall system reliability
        System response time
        Failure isolation and recovery
        Maintainability of code and information
        General extensibility to solve future problems
        Separation of business and product concerns
        New or improved features

    The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign.

    Business Drivers

    Typically all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors, and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine, there must be real value deliverable within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and value within a shorter timeframe. As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are known, is key to finding the best solutions.

    Read the article

  • Adding Fake Build Information in TFS 2010

    - by Jakob Ehn
    We have been using TFS 2010 Build for distributing a build in parallel over several agents, where the actual compilation is done by a bunch of external tools and compilers, i.e. no MSBuild involved. We are using the ParallelTemplate.xaml template that Jim Lamb blogged about previously, which distributes each configuration to a different agent. We developed custom activities for running these external compilers, collecting the information and errors by reading standard out/error, and pushing it back to the build log. But since we aren't using MSBuild, we don't get the nice configuration summary section on the build summary page that we are used to. We would like to show the result of each configuration with any errors/warnings as usual, together with a link to the log file.

    TFS 2010 API to the rescue! What we need to do is add information to the InformationNode structure that is associated with every TFS build. The log that you normally see in the Log view is built up as a tree structure of IBuildInformationNode objects. This structure can be accessed by using the InformationNodeConverters class. This class also contains some helper methods for creating BuildProjectNodes, which contain the information about each project that was built, for example which configuration, the number of errors and warnings, and a link to the log file.

    Here is a code snippet that first creates a "fake" build from scratch and then adds two BuildProjectNodes, one for Debug|x86 and one for Release|x86, with some release information:

        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://lt-jakob2010:8080/tfs"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();
        var buildDef = buildServer.GetBuildDefinition("TeamProject", "BuildDefinition");

        // Create fake build with random build number
        var detail = buildDef.CreateManualBuild(new Random().Next().ToString());

        // Create Debug|x86 project summary
        IBuildProjectNode buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Debug", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 1;
        buildProjectNode.CompilationWarnings = 1;
        buildProjectNode.Node.Children.AddBuildError("Compilation", "File1.cs", 12, 5, "", "Syntax error", DateTime.Now);
        buildProjectNode.Node.Children.AddBuildWarning("File2.cs", 3, 1, "", "Some warning", DateTime.Now, "Compilation");
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfiledebug.txt"));
        buildProjectNode.Save();

        // Create Release|x86 project summary
        buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Release", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 0;
        buildProjectNode.CompilationWarnings = 0;
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfilerelease.txt"));
        buildProjectNode.Save();

        detail.Information.Save();
        detail.FinalizeStatus(BuildStatus.Failed);

    Running this code creates a build that, as the screenshot in the original post shows, has two configurations with error and warning information and a link to a log file, just like a regular MSBuild build would have produced. This is very useful when using TFS 2010 Build in heterogeneous environments. It would also be possible to do this when running compilations completely outside TFS Build, and then push the results into TFS for easy access. You can push all the information, including the compilation summary, drop location, test results etc., using the API.

    Read the article
