Search Results

Search found 8893 results on 356 pages for 'stored'.


  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they make a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason for this is that the test results are not just the result of the test run but also all the binaries that were part of the test run. This often also means that the debug symbols (*.pdb) will be downloaded to your local machine. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why. They can then fix the tests and rerun them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010 where you often have multiple build agents, which of course results in different paths to the .trx file. Note: to use this you must have read permission to the build folder on the build agent where the build was executed.

    1. Open the build result for the build.
    2. Click View Log.
    3. Locate the part where MSTest is invoked. Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice!
    4. On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored.
    5. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults.
    6. Double-click on the .trx file and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview

    The purpose of this article is to describe some of the important foundational concepts of ATG. This is not intended to cover all areas of the ATG platform, just the most important subset: the ones that allow ATG to be extremely flexible, configurable, high performance, etc. For more information on these topics, please see the online product manuals.

    Modules

    The first concept is called the 'ATG Module'. Simply put, you can think of modules as the building blocks for ATG applications. The ATG development team builds the out-of-the-box product using modules (these are the 'out of the box' modules). Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out-of-the-box ATG modules. Modules can be very simple, containing minimal definition and perhaps a small amount of configuration. Alternatively, a module can be rather complex, containing custom logic, database schema definitions, configuration, one or more web applications, etc. Modules generally have dependencies on other modules (the modules beneath them). For example, the Commerce Reference Store module (CRS) requires the DCS (out-of-the-box commerce) module.

    Modules have a ton of value because they provide a way to decouple a customer's implementation from the out-of-the-box ATG modules. This allows for a much easier job when it comes time to upgrade the ATG platform. Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.

    One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start. One of the first things ATG does is to look through all the modules you specified and, for each one, determine a list of modules that are also required to start (based on each module's dependencies). Once this final, ordered list is determined, ATG continues to boot up. One consequence of the ordered list of modules is that each module can contain its own classes and configuration. During boot, the ordered list of modules drives the unified classpath and configpath. This is what determines which classes override others, and which configuration overrides other configuration. Think of it as a layered approach.

    The structure of a module is well defined. It simply looks like a folder in a filesystem that has certain other folders and files within it. Here is a list of items that can appear in a module:

    MyModule:
    - META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module. One important property is what other modules this module depends on.
    - config - this is typically present in most modules. It defines a tree structure (folders containing properties files, XML, etc.) that maps to ATG components (these are described below).
    - lib - this contains the classes (typically in jarred format) for any code defined in this module.
    - j2ee - this is where any web-apps would be stored.
    - src - in case you want to include the source code for this module, it's standard practice to put it here.
    - sql - if your module requires any additions to the database schema, you should place that schema here.

    Modules can also contain sub-modules. A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).
    Finally, it is important to completely understand how modules work if you are going to be able to leverage them effectively. There are many different ways to design the modules you want to create, and some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications.

    Components

    A component in ATG can be thought of as a single item that performs a certain set of related tasks. An example could be a ProductViews component, used to store information about what products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed in the order of their being viewed). These component properties would typically also offer get and set methods used to retrieve and store the property values. Components typically also offer other types of useful methods aside from get and set. In the ProductViews component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not.

    Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components, so when you create a new ATG component, it will be able to be found 'within' nucleus. Nucleus allows ATG components to reference one another; this is how components are strung together to perform meaningful work. It's also a mechanism to prevent redundant configuration: define it once and refer to it from everywhere.

    Components can be extremely simple (i.e. a single property with a get method), or can be rather complex, offering many properties and methods. To be an ATG component, a few things are required:
    - a class - you can reference an existing out-of-the-box class or you could write your own
    - a properties file - this is used to define your component
    - the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder

    Within the properties file, you will need to point to the class you want to use:

        $class=com.mycompany.myclass

    You may also want to define the scope of the component (request, session, or global):

        $scope=session

    In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work. You can configure components as well as extend their functionality by writing code.

    Repositories

    Repositories (a.k.a. the Data Anywhere Architecture) are the mechanism that ATG uses to access data primarily stored in relational databases, but also LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical in that, if not handled properly, it could create a bottleneck. ATG's repository functionality has been around for a long time and has proven to be extremely scalable. Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture. Repositories essentially map relational tables to objects in ATG, as well as handle caching. ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc.), and these comprise both the underlying database schema and the associated repository definition files (XML).
    It is fully expected that implementations will extend / change the out-of-the-box repository definitions, so there is a prescribed approach to doing this. The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above). The other important best practice is to never modify the out-of-the-box schema; in other words, don't add columns to existing ATG tables, just create your own new tables. These will help ensure you can easily upgrade your application at a later date.

    xml-combination

    As mentioned earlier, when you start ATG, the order of the modules determines the final configpath. Files within this configpath are 'layered' such that modules on top can override configuration of modules below them. This is the same concept for repository definition files. If you want to add a few properties to the out-of-the-box user profile, you simply need to create an XML file containing only your additions and place it in the correct location in your module. At boot time, your definition will be combined (hence the term xml-combination) with the lower, out-of-the-box modules, with the result being a user profile that contains everything (out of the box, plus your additions). Aside from just adding properties, there are also ways to remove and change properties. (Sketches of a simple component definition and of this kind of layered repository definition appear at the end of this article.)

    types of properties

    Aside from the normal 'database backed' properties, there are a few other interesting types:
    - transient properties - these are properties that are in memory but not backed by any database column. These are useful for temporary storage.
    - java-backed properties - by nature these are transient, but in addition, when you access such a property (by calling the get method), instead of looking up a piece of data it performs some logic and returns the results. 'Age' is a good example: if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property to look at the birth date, compare it to the current date, and return the person's age.
    - derived properties - this is what allows for inheritance within the repository structure. You could define a property at the category level and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.

    caching

    There are a number of different caching modes which are useful at different times depending on the nature of the data being cached. For example, the simple cache mode is useful for things like user profiles. This is because the user profile will typically only be used on a single instance of ATG at one time. Simple cache mode is also useful for read-only types of data such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time; an example would be a customer's order. There are many options in terms of configuring caching which are outside the scope of this article; please refer to the product manuals for more details.

    Other important concepts - out of scope for this article

    There are a whole host of concepts that are very important pieces of the ATG platform but are out of scope for this article. Here's a brief description of some of them:
    - formhandlers - these are ATG components that handle form submissions by users.
    - pipelines - these are configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order.
    - special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations. See the manuals for more information.
    - web development - JSP/DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library. This library is used throughout your JSP pages to interact with all the ATG components.
    - messaging - a message sub-system used as another way for components to interact.
    - personalization - ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.
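    To make the component and xml-combination discussions above concrete, here are two minimal sketches. All names in them (the component path, class, table, and property names) are invented for illustration and are not part of the out-of-the-box product. A Nucleus component is defined by a properties file placed in the module's config tree, e.g. config/mycompany/ProductViews.properties:

        # which class implements the component, and how long it lives
        $class=com.mycompany.ProductViews
        $scope=session
        # components can wire in other components by their Nucleus path
        profile=/atg/userprofiling/Profile

    And an xml-combination layer that adds a property to the out-of-the-box user profile would be an XML file placed at config/atg/userprofiling/userProfile.xml in your module, containing only the additions:

        <gsa-template>
          <item-descriptor name="user">
            <table name="mycompany_user" type="auxiliary" id-column-name="user_id">
              <property name="favoriteColor" data-type="string"/>
            </table>
          </item-descriptor>
        </gsa-template>

    At boot time this fragment is combined with the definitions in the modules beneath yours, so the resulting 'user' item has all the out-of-the-box properties plus favoriteColor, backed by your own new table rather than a modified ATG table.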

    Read the article

  • How to recover bad encrypted directory

    - by Fato Alessandro
    I had a problem while reinstalling Ubuntu. I tried to reinstall without formatting the home directory and with the same username. The home directory of the new installation was set to be encrypted. Then the installation went wrong because of the CD, so it really never started (it stopped at the copying stage). However, Ubuntu did encrypt the home directory, but the procedure probably went wrong. By now I have installed Ubuntu in another partition and tried to mount with encrypted-recovery, but the mounted directory in tmp wasn't the directory I had before. There were just strange directories with coded names. The strange fact is that the file system is not damaged: it still knows how much data is actually stored in it. If I look with GParted or even Nautilus I see 45 GB of data present on the partition. This makes me think that my data is not erased but maybe hidden. Moreover, when I tried to mount the encrypted home directory with encrypted-recovery-personal, it asked me for the encryption secret. I entered nothing, just pressed Enter, and the password was accepted. Is there a method for recovering my data? Maybe trying to re-encrypt the directory? How could I get back to the previous documents? Thanks to everyone.

    Read the article

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIMEOFFSET datatype. What I found out is that for a datetimeoffset value with precision 0 (seconds only), SQL Server needs 8 bytes to represent the value, but stores 9 bytes. This is because SQL Server adds one byte that holds the precision for the datetimeoffset value. Start with this very simple repro:

        declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

        select  cast(cast(@now as datetimeoffset(0)) as binary(9)),
                cast(cast(@now as datetimeoffset(1)) as binary(9)),
                cast(cast(@now as datetimeoffset(2)) as binary(9)),
                cast(cast(@now as datetimeoffset(3)) as binary(10)),
                cast(cast(@now as datetimeoffset(4)) as binary(10)),
                cast(cast(@now as datetimeoffset(5)) as binary(11)),
                cast(cast(@now as datetimeoffset(6)) as binary(11)),
                cast(cast(@now as datetimeoffset(7)) as binary(11))

    Now we are going to copy and paste these binary values and investigate which value represents what time part. In the table below, the prefix is the precision byte, the ticks are the time part (shown as the little-endian hex bytes and their decimal value), the days are the date part, and the offset is the UTC offset in minutes.

        Prefix  Ticks (hex)  Ticks (dec)   Days    Days    Offset  Offset  Original value
        ------  -----------  ------------  ------  ------  ------  ------  ------------------------
        0x  00  0CF700              63244  A8330B  734120  D200       210  0x000CF700A8330BD200
        0x  01  75A609             632437  A8330B  734120  D200       210  0x0175A609A8330BD200
        0x  02  918060            6324369  A8330B  734120  D200       210  0x02918060A8330BD200
        0x  03  AD05C503         63243693  A8330B  734120  D200       210  0x03AD05C503A8330BD200
        0x  04  C638B225        632436934  A8330B  734120  D200       210  0x04C638B225A8330BD200
        0x  05  BE37F67801     6324369342  A8330B  734120  D200       210  0x05BE37F67801A8330BD200
        0x  06  6F2D9EB90E    63243693423  A8330B  734120  D200       210  0x066F2D9EB90EA8330BD200
        0x  07  57C62D4093   632436934231  A8330B  734120  D200       210  0x0757C62D4093A8330BD200

    What you can see is that the day part is equal in all cases, which makes sense since the precision doesn't affect the date part. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time can be found by adding the "UTC offset" minutes. And if you look at it, it makes perfect sense that each following ticks value is ten times greater each time the precision is increased one step, too.

    //Peter
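    As a quick check that the parts decode the way described above, here is a small T-SQL sketch using the values read out of the precision-0 row (the day part counts days from 0001-01-01, and the ticks hold the UTC time of day):

        declare @days          int = 734120;  -- bytes A8330B read little-endian as 0x0B33A8
        declare @seconds       int = 63244;   -- bytes 0CF700 read little-endian as 0x00F70C
        declare @offsetMinutes int = 210;     -- bytes D200 read little-endian as 0x00D2

        select dateadd(minute, @offsetMinutes,
               dateadd(second, @seconds,
               dateadd(day, @days, cast('0001-01-01' as datetime2(0))))) as local_time;
        -- returns 2010-12-15 21:04:04, the original local value rounded to whole seconds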

    Read the article

  • You Can’t Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, for which I got an error. As you can see from the error message, you aren't allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered that error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprise, I got a similar error.

    Conclusion

    Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don't know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated; just leave a comment.

    -Frog Out

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (e.g. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are:

        WordPress/3.3.1; http://www.facecolony.org
        Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>)

    I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
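    For readers unfamiliar with the pattern being described, here is a trimmed-down sketch of this kind of CASE-based query; the table and column names are invented for illustration:

        SELECT Browser, COUNT(*) AS Hits
        FROM (
            SELECT CASE
                     WHEN UserAgent LIKE '%Firefox/9%' THEN 'Firefox 9'
                     WHEN UserAgent LIKE '%MSIE 9.0%'  THEN 'Internet Explorer 9'
                     WHEN UserAgent LIKE '%bot%'
                       OR UserAgent LIKE '%spider%'    THEN 'Bot/Crawler'
                     ELSE 'Unknown'
                   END AS Browser
            FROM dbo.RequestLog
        ) AS agents
        GROUP BY Browser;

    The real statement is nearly 100 WHEN branches, which is the maintenance burden the question is about.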

    Read the article

  • CodePlex Daily Summary for Monday, September 30, 2013

    Popular Releases

    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.5apifix-alpha: updated to fix the IMDB look-up problem; working on other problems, but wanted this out there for testing.
    - Visual Log Parser: VisualLogParser: Portable Visual Log Parser for .NET 4.0.
    - Random searcher i pochodne: Generatorek playlisty: generates playlists in .m3u format. For now a beta of a beta, but it already works and can be used.
    - sb0t v.5: sb0t 5.15: Fixed bug in join filter. Fixed bug in pm blocking. Added new Crypto and Entities static classes to scripting. Updated the default node list.
    - Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.9.29): Initial release.
    - AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features: the list of words (mp3 files) is available upon typing when a download path is defined; a list of download paths is added; paths history settings added. Bugs fixed: case mismatch in the word search field; path-does-not-exist bug when the history has been used; path, when filled from the dialog, not stored; autocomplete list refreshed after path change; word sought was deleted when the path was changed; at the end the sought word list was deleted; word list not refreshed at download end...
    - HD-Trailers.NET Downloader: HD-Trailer.Net Downloader v2.1.5: This started out as an effort to improve the search for the correct IMDB page for the movie. I think I have done that here. I have run about 200 movies and the correct movie was identified in all cases, including some entries that were problematic in the past. I also swatted several bugs that popped up under special circumstances and resulted in exceptions. This version should be quite a bit better than previous versions. Let me know if there are any issues.
    - Wsus Package Publisher: Release v1.3.1309.28: Fixed a bug where WPP crashed when running on a computer where Windows was installed in a language other than Fr, En or De and launching the Update Creation Wizard. Fixed a bug where WPP crashed if some multi-thread jobs were launched with more than 64 items. Added a button to abort the "Install This Update" wizard. WPP now remembers which columns were shown last time. URLs are clickable on the Update Information tab. New feature: when double-clicking on an update, the default action exec...
    - Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts the emphasis on the FilteredStream and on easing how to manage exceptions that can occur due to the network or any other issue you might encounter. Will be available through NuGet on 29/09/2013. FilteredStream features provided by the Twitter Stream API: ability to track specific keywords, specific users, and specific locations. Additional features: detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...
    - AcDown Anime & Comic Downloader: AcDown v4.5: a C# download manager (formerly "Acfun Downloader") for video and comic sites such as Acfun, Bilibili and YouTube, with the companion AcPlay player; requires .NET Framework 2.0. The v4.5 release notes (originally in Chinese) mention an AcPlay update to v3.5, an improvement of around 30%, new GoodManga.net support, fixes for Acfun and Bilibili, flvcd parsing, and SfAcg.
    - CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: stereoscopic 3D display support; based on the Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...
    - C# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - SimpleExcelReportMaker: Serm 0.02: Source code and sample.
    - Magick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: the ToBitmap method of MagickImage returns a png instead of a bmp; the value for full transparency changed from 255 (Q8) / 65535 (Q16) to 0; MagickColor now uses floats instead of Byte/UInt16.
    - Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of movie folders and files, there has been some refinement done, and this release introduces Blu-ray movie folder support for pre-Frodo and Frodo-onwards versions of XBMC. To start with, the context menu option for renaming movies now has three sub-options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...
    - FFXIV Crafting Simulator: Crafting Simulator 2.3: Major refactoring of the code behind. Added a current durability and a current CP textbox.
    - DNN CMS Platform: 07.01.02: Major highlights: added the ability to manage the Vanity URL prefix; added the ability to filter members in the member directory by role; fixed an issue where the user could inadvertently click the login button multiple times; fixed issues where core classes could not be used in an out-of-process cache provider; fixed an issue where the profile visibility submenu was not displayed correctly; fixed an issue where the member directory was broken when the "Convert URL to lowercase" setting was enabled; fixed issu...
    - Rawr: Rawr 5.4.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: SignalR sample is added.
    - CODE Framework: 4.0.30923.0: See the change notes in the documentation section for details on what's new. Note: if you download the class reference help file, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.

    New Projects

    - BeerStats: Beeeeeeer!
    - CodeSet: tt
    - fishteam: Connecting people by their music interests
    - GassFlow: This is a computational method for genome annotation based on species similarity.
    - GasTeam: connecting people
    - Hermess Branch: testhermesbranch
    - HyperPage: HyperPage is a HTML\PHP\CSS programming studio. Written in VB12; features folders, files, intellisense, file searching and a color selector.
    - JPEG Auto-rotate: A shell extension that automatically rotates JPEG images based on the orientation stored in their EXIF tag (pictures taken with modern smartphones/cameras).
    - Luận văn tốt nghiệp K09: Graduation thesis (K09): sentiment analysis of social network users.
    - MixDoS: MixDoS is an application that helps you control your computer and collect all your batch applications! This application is easier to use than X3ME!
    - NeoLua: A Lua implementation for the Dynamic Language Runtime (DLR).
    - SharePoint Helpers: A cross-product JavaScript and .NET library to simplify SharePoint (2007, 2010, 2013, Office 365) development and make migration easier.
    - Snake Board Game: Childhood is always a happy time, and classic games are never left out of our memories. Here is the first game that I created when I was studying....
    - spmisframework: SpmisFramework
    - TFSTest: Just for test
    - Trace Reader for Microsoft Dynamics CRM: Trace Reader for Microsoft Dynamics CRM helps you read the trace files generated by Microsoft Dynamics CRM (4.0, 2011 and 2013) in a graphical interface.
    - Tuple Edit: Editor/IDE for multiple languages.
    - ultvast: utlimus for vast
    - User Cloner for Dynamics CRM 2011: User Cloner for Dynamics CRM 2011 is a utility for all CRM administrators and consultants who have to deal with user issues in CRM.
    - WHKY: test
    - WhoIs.dart: Tool to query Whois servers, implemented in Dart.
    - winchrome: Provides different types of window chrome for NavigationWindows.
    - YLH_CRM_Project: This is used to reconstruct ylh of crm.

    Read the article

  • ASP.NET Session Management

    - by geekrutherford
    Great article (a little old but still relevant) about the inner workings of session management in ASP.NET: Underpinnings of the Session State Management Implementation in ASP.NET.

    Using StateServer and the BinaryFormatter serialization it requires caused me quite the headache over the last few days. Curiously, it appears the w3wp.exe process actually consumes more memory when utilizing StateServer and storing somewhat large and complex data types in session.

    Users began experiencing Out Of Memory exceptions in the production environment. Looking at the stack trace, it related to serialization using the BinaryFormatter. Using remote debugging against our QA server, I noted that the code in the application functioned without issue. The exception occurred outside the context of the application itself, when the request had completed and the web server was trying to serialize session state into the StateServer.

    The short-term solution is switching back to the InProc method. Thus far this has proven to consume considerably less memory and has caused no issues. Long term, the complex object stored in session will be off-loaded into a web service used to access the information directly from the database, outside the context of the object used to encapsulate it.
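    For reference, the switch between the two modes discussed above is a web.config setting; this is a sketch with placeholder server address and timeout values:

        <!-- out-of-process state server (requires the ASP.NET State Service) -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=127.0.0.1:42424"
                      timeout="20" />

        <!-- in-process (the short-term fix described above) -->
        <sessionState mode="InProc" timeout="20" />

    Note that InProc keeps live object references in the worker process and performs no serialization at all, which is why the BinaryFormatter exceptions disappear with it.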

    Read the article

  • K-12 and Cloud considerations

    - by user736511
    Much like every other Public Sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition. Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula, etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students. One area viewed as a solution to at least some of the challenges is the use of the "Cloud" in education. Although the concept of the "Cloud" is nothing new in education, with many providers supplying educational material over the web, school districts now defer previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail. Doing so, however, does not reduce an important risk: that of privacy. As always, Cloud implementations are viewed in a skeptical manner because of the perceived reduction in sensitive data management and protection thereof, although with a careful approach and the right tooling, the benefits realized by Clouds can extend to security and privacy.

    Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies and strong security regardless of where the sensitive data is stored - on premise or in a Cloud. Common management tools, role-based access controls, access policy management and engineered systems provided by Oracle can be the foundational pieces on which school districts build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how best to take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events.  The latest beta has this feature available.  If you’re interested in this I’d like to get some feedback. I handle the SP:StmtCompleted and the SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use with caution as this can make your traces much larger. If there are statement level events in the trace file they will be processed.  However the query screen displays batch level *OR* statement level events.  If it did both we’d be double counting. I don’t have very many traces with statement completed events in them.  That means I only did limited testing of how it parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems the ClearTrace support forum is the best place for those.
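    For anyone who wants to generate a test trace containing these events: in a server-side trace they are event ids 41 (SQL:StmtCompleted) and 45 (SP:StmtCompleted). A sketch, assuming @TraceID came from an earlier sp_trace_create call:

        declare @on bit = 1;
        -- column ids: 1 = TextData, 13 = Duration, 18 = CPU
        exec sp_trace_setevent @TraceID, 41, 1,  @on;  -- SQL:StmtCompleted
        exec sp_trace_setevent @TraceID, 41, 13, @on;
        exec sp_trace_setevent @TraceID, 41, 18, @on;
        exec sp_trace_setevent @TraceID, 45, 1,  @on;  -- SP:StmtCompleted
        exec sp_trace_setevent @TraceID, 45, 13, @on;
        exec sp_trace_setevent @TraceID, 45, 18, @on;

    As the post warns, use with caution; statement-level events can bloat a trace very quickly.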

    Read the article

  • Is there a LOGO interpreter that actually has a turtle?

    - by Tim Post
    This is not a repeat of the now infamous "How do I move the turtle in LOGO?" Recently, I had the following conversation with my five year old daughter:

    Daughter: Daddy, do you write programs?
    Me: Yes!
    Daughter: Daddy, what's a program?
    Me: A program is a set of instructions that a computer follows.
    Daughter: Daddy, can I write a program too?
    Me: Sure!

    This got me scrambling to think of a very basic language that a five year old could get some satisfaction from mastering rather quickly. I'm ashamed to admit that the first thing that came to mind was this:

        10 INPUT "Tell me a secret" A$
        20 PRINT "Wow really? :" A$
        30 GOTO 10

    That isn't going to hold a five year old's attention for very long, and it requires too much of a lecture. However, moving a turtle around and drawing neat pictures might just work. Sadly, my search for a LOGO interpreter yielded nothing but ad-ridden sites, flight simulators and a whole bunch of other stuff that I really don't want. I'm hoping to find a cross-platform (Java / Python) LOGO interpreter (dare I call it simulator?) with the following features:

    - Can save / replay commands (stored programs)
    - Has an actual turtle
    - Sound effects are a plus

    Have you stumbled across something like this, and if so, can you provide a link? I hate to ask a 'shopping' sort of question, but it seemed much better than "Is LOGO appropriate for a five year old?"
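    For contrast, this is the kind of first program a turtle makes possible: one line of standard LOGO that draws a square, with something to see immediately:

        REPEAT 4 [FORWARD 100 RIGHT 90]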

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving.

    My model has many relationships like this:

        Project has_many Nodes
                has_many GlobalConditions
        Node    has_one  Parent
                has_many Nodes
                has_many WeightingFactors through NodeFactors
                has_many Tags through NodeTags
        GlobalCondition has_many Nodes (referenced by Id, rather than replicating the tree)
        WeightingFactor has_many Nodes through NodeFactors
        Tag             has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster?

    Having worked a lot with .NET, if I were in a similar situation there I would look at building up a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves.

    It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing.

    I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.

    Read the article

  • Reasonable technological solutions to create CRM using .NET, eventually Java

    - by user1825608
    My background (if it's too long, just skip it please ;) ): I am a Java programmer (because of demand): mostly a teacher for other students, and I have worked on a few theses for others, but during my journey I discovered that .NET and Microsoft's tools are at least two levels above Java and its tools, so I want to learn more about them. I have programmed a little on Windows Phone (NFC tags, TCP clients, a guitar tuner using the internal microphone, simple RSS), used WPF, integrated WPF with Windows Forms, and used Apple Bonjour (.NET). I have experience with IP cameras and with unusual problems. I am learning Android, but I don't like it at all.

    Problem: I was asked by my friend to create a CRM for a small new company. There will be at most 20 workers in the company, working at computers in a few cities in the country (Poland). They just want to store contracts with the clients and the clients' data. I am not sure what exactly they do, but they probably sell apartments, so there will be at most a few thousand contracts to store in the far future. Now, I am totally new to CRM, but I want to learn. I have a few questions:

    1. Should the data be stored on a server in the company's building running 24/7, or in the cloud? If the cloud, which one?
    2. Should I use ASPX or WPF? I read one topic about it, but as far as I know ASPX sites can be viewed from every device with an internet browser: tablets, phones (Android, WP, iOS) and computers, all at the same time, so the job is done once and for all (am I right?). I know nothing about ASPX.
    3. Can WPF also be used in a manner that does not require porting it to other platforms?

    Read the article

  • How can I change the color of this part of Nautilus for my Ambiance theme modification?

    - by WarriorIng64
    I am currently messing around with Ambiance, trying to give Nautilus a dark sidebar (because I think it looks much better that way, especially with the current look having the dark-colored breadcrumbs clashing horribly with the light-colored sidebar). I have zero experience and knowledge of how to create GTK+ themes, and I couldn't find any documentation online, so I just made a copy of the folder for Ambiance under /usr/share/themes, renamed it "Ambiance Dark Sidebar" and just started messing with color values. As shown below, I found the value in nautilus.css needed to be tweaked to create the dark sidebar, but there is still one part that stubbornly stays light gray that I want to change so it matches the rest better (I'm not sure what the proper terminology is here, so I just provided a picture and marked it in red). Does anyone know what I need to do to change the color of this part so it matches the rest of the sidebar better? I already know from seeing themes like Adwaita Dark that this should be possible, but even after poking around in that I didn't find anything that seemed to help. Here are the contents of the files I modified in the theme folder Ambiance Dark Sidebar, stored alongside Ambiance in /usr/share/themes: index.theme gtk-3.0/apps/nautilus.css

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone who has an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and, very important, what are the security requirements?

    From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead: it needs a certain amount of memory for locking and so on. But it has a very clean boundary; everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix; perhaps this person could segment customers into larger regions or districts or products in a database, and within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
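    A sketch of the "schemas within one database" option mentioned above, with invented names; each customer gets a schema-scoped copy of the objects and a login granted rights only on its own schema:

        CREATE SCHEMA Customer0001 AUTHORIZATION dbo;
        GO
        CREATE TABLE Customer0001.Orders (
            OrderID   int IDENTITY(1,1) PRIMARY KEY,
            OrderDate datetime NOT NULL
        );
        GO
        GRANT SELECT, INSERT, UPDATE ON SCHEMA::Customer0001 TO Customer0001User;

    The same boundary questions apply: 15,000 of these is a lot of objects, but it keeps per-customer security clean without per-database memory overhead.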

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems:
    - massive code duplication
    - no encapsulation; most private data is accessible from outside, and some of the business data is also held in singletons, so it is changeable not just from outside but from everywhere
    - no business model; business data is stored in Object[] and double[][], so no OO.

    There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so I'm able to show progress to management (and in fact to start benefiting from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because these tests do not test whether something is correct.
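    As an illustration of the cheapest kind of test to start with, here is a minimal characterization (golden master) test in JUnit; the class and method names are hypothetical stand-ins for the legacy code:

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class LegacyEngineCharacterizationTest {

            @Test
            public void pinsDownCurrentBehaviourBeforeRefactoring() {
                // business data lives in raw arrays, as described above
                double[][] input = { { 1.0, 2.0 }, { 3.0, 4.0 } };

                double result = LegacyEngine.calculate(input);

                // the expected value is whatever the code produces today;
                // the test records current behaviour, it does not assert correctness
                assertEquals(42.0, result, 1e-9);
            }
        }

    Such tests are fast to write and raise coverage quickly, though, like the generated suites, they only pin behaviour down rather than verify it.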

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release

    This release includes the following enhancements:
    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document containing all [event_message_context] info (if any exists)

    Installation instructions

    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac.
    2. Unzip to a folder of your choosing.
    3. Open a command prompt and change to the directory into which you unzipped the files.
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server; change as appropriate.)

    If everything works OK, this will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog].

    Feedback is welcomed!

    @Jamiet

    Read the article

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer value (but the density is interpreted as a decimal value between 0 and 1). My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that correspond to the interpolation each vertex has between each voxel, and two decimal values that are used to determine where the fragment lies on the triangle. The material and interpolation attributes are the exact same for each vertex in the triangle. The fragment shader samples each texture that corresponds to each material and then uses the aforementioned couple of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work fine, but it seems like a big memory hog. I won't be able to reuse vertices in the mesh with indexing, and each vertex will have a lot of data associated with it. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?
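    To make the cost concrete, here is a sketch in C++ of the vertex layout described above; the field names are invented for illustration:

        #include <cstdint>

        struct TerrainVertex {
            float    position[3];  // base position (not one of the eleven extras)
            uint32_t material[6];  // materials of the voxels the vertex lies between
            float    interp[3];    // interpolation of the vertex between those voxels
            float    triCoord[2];  // where the fragment lies on the triangle
        };
        static_assert(sizeof(TerrainVertex) == 56, "12 + 24 + 12 + 8 bytes, no padding");

    And because the material and interpolation attributes are identical per triangle rather than per unique position, vertices cannot be shared through an index buffer, which is the reuse problem mentioned above.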

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server and want to use services such as Google, Facebook etc. to provide basic authentication for my application. My current idea of a typical flow is this:

    1. The user taps a Facebook/Google button, which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that the next time the user uses the app it can be retrieved.
    2. The access token is transmitted to my Node.js server, which stores it and tags it as un-verified.
    3. The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token.
    4. The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call.

    Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verification of users on iOS clients without having to roll my own security. Any opinions and advice welcome.
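    A sketch of the server-side verification step (step 3 above) in TypeScript for Node.js. The endpoint shown is Google's tokeninfo endpoint as one example of calling the provider; error handling is reduced to a boolean:

        import * as https from "https";

        // Resolves true if the provider recognises the access token.
        function verifyAccessToken(accessToken: string): Promise<boolean> {
          const url =
            "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=" +
            encodeURIComponent(accessToken);
          return new Promise((resolve) => {
            https
              .get(url, (res) => resolve(res.statusCode === 200))
              .on("error", () => resolve(false));
          });
        }

    A failed check would trigger the "re-authenticate and present a new token" branch of the flow.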

    Read the article

  • iOS chat application design, sending/relaying the message over to the end user

    - by AyBayBay
    I have a design question. Let us say you were tasked with building a chat application, specifically for iOS (iOS Chat Application). For simplicity let us say you can only chat with one person at a time (no group chat functionality). How then can you achieve sending a message directly to an end user from phone A to phone B? Obviously there is a web service layer with some API calls. One of the API calls available will be startChat(). After starting a chat, when you send a message, you make another async call, let us call it sendMessage() and pass in a string with your message. Once it goes to the web service layer, the message gets stored in a database. Here is where I am currently stuck. After the message gets sent to the web service layer, how do we then achieve sending/relaying the message over to the end user? Should the web server send out a message to the end user and notify them, or should each client call a receiveMessage() method periodically, and if the server side has some info for them it can then respond with that info? Finally, how can we handle the case in which the user you are trying to send a message to is offline? How can we make sure the end user gets the packet when he moves back to an area with signal?
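    One way to picture the "client calls receiveMessage() periodically" option is a timer-driven poll. A sketch in Swift, where ChatAPI is a hypothetical wrapper around the web service layer described above:

        import Foundation

        final class ChatPoller {
            private var timer: Timer?
            private let api: ChatAPI  // hypothetical client for the web service layer

            init(api: ChatAPI) { self.api = api }

            func start() {
                // Poll every few seconds while the app is in the foreground;
                // a server-initiated push notification would cover offline users.
                timer = Timer.scheduledTimer(withTimeInterval: 5.0, repeats: true) { [weak self] _ in
                    self?.api.receiveMessage { incoming in
                        // deliver `incoming` messages to the UI
                    }
                }
            }

            func stop() { timer?.invalidate() }
        }

    Because the server stores every message in the database first, a user who was offline simply receives the backlog on their next successful poll or reconnect.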

    Read the article

  • How do I get the correct values from glReadPixels in OpenGL 3.0?

    - by NoobScratcher
    I'm currently trying to implement mouse selection in my game editor, and I ran into a little problem: when I look at the values stored in &pixel[0], &pixel[1], &pixel[2], &pixel[3], I get

        r: 0 g: 0 b: 0 a: 0

    As you can see, I'm not able to get the correct values from glReadPixels(). My 3D models are colored red using glColor3f(255,0,0); I was hoping someone could help me figure this out. Here is the source code:

        case WM_LBUTTONDOWN:
        {
            GetCursorPos(&pos);
            ScreenToClient(hwnd, &pos);

            GLenum err = glGetError();
            while (glGetError() != GL_NO_ERROR) { cerr << err << endl; }

            glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &pixel[0]);

            cerr << "r: " << (int)pixel[0] << endl;
            cerr << "g: " << (int)pixel[1] << endl;
            cerr << "b: " << (int)pixel[2] << endl;
            cerr << "a: " << (int)pixel[3] << endl;

            cout << pos.x << endl;
            cout << pos.y << endl;
        }
        break;

    I use: Win32 API, OpenGL 3.0, C++

    Read the article

  • Auto-hydrate your objects with ADO.NET

    - by Jake Rutherford
    Recently, while writing the monotonous code for pulling data out of a DataReader to hydrate some objects in an application, I suddenly wondered: "is this really necessary?" You've probably asked yourself the same question, and many of you have:

    - Used a code generator
    - Used an ORM such as Entity Framework
    - Written the code anyway because you like busy work

    In most of the cases I've dealt with, when making a call to a stored procedure the column names match up with the properties of the object I am hydrating. Sure, that isn't always the case, but most of the time it's a 1-to-1 mapping. Given that fact, I whipped up the following method of hydrating my objects without having to write all of the code. First I'll show the code, and then explain what it is doing.

        /// <summary>
        /// Abstract base class for all Shared objects.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        [Serializable, DataContract(Name = "{0}SharedBase")]
        public abstract class SharedBase<T> where T : SharedBase<T>
        {
            private static List<PropertyInfo> cachedProperties;

            /// <summary>
            /// Hydrates the derived class with values from the record.
            /// </summary>
            /// <param name="dataRecord"></param>
            /// <param name="instance"></param>
            public static void Hydrate(IDataRecord dataRecord, T instance)
            {
                var instanceType = instance.GetType();

                // Caching properties to avoid repeated calls to GetProperties.
                // Noticeable performance gains when processing the same types repeatedly.
                if (cachedProperties == null)
                {
                    cachedProperties = instanceType.GetProperties().ToList();
                }

                foreach (var property in cachedProperties)
                {
                    if (!dataRecord.ColumnExists(property.Name)) continue;

                    var ordinal = dataRecord.GetOrdinal(property.Name);
                    var isNullable = property.PropertyType.IsGenericType &&
                                     property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
                    var isNull = dataRecord.IsDBNull(ordinal);
                    var propertyType = property.PropertyType;

                    // For Nullable<T> properties, switch on the underlying type T.
                    if (isNullable)
                    {
                        if (!string.IsNullOrEmpty(propertyType.FullName))
                        {
                            var nullableType = Type.GetType(propertyType.FullName);
                            propertyType = nullableType != null ? nullableType.GetGenericArguments()[0] : propertyType;
                        }
                    }

                    switch (Type.GetTypeCode(propertyType))
                    {
                        case TypeCode.Int32:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (int?)null : dataRecord.GetInt32(ordinal), null);
                            break;
                        case TypeCode.Double:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (double?)null : dataRecord.GetDouble(ordinal), null);
                            break;
                        case TypeCode.Boolean:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (bool?)null : dataRecord.GetBoolean(ordinal), null);
                            break;
                        case TypeCode.String:
                            // Strings are reference types, so a plain null check suffices.
                            property.SetValue(instance, isNull ? null : dataRecord.GetString(ordinal), null);
                            break;
                        case TypeCode.Int16:
                            // Cast to short? (not int?) so a non-null value boxes as a short,
                            // which SetValue can assign to a short/short? property.
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (short?)null : dataRecord.GetInt16(ordinal), null);
                            break;
                        case TypeCode.DateTime:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (DateTime?)null : dataRecord.GetDateTime(ordinal), null);
                            break;
                    }
                }
            }
        }

    Here is a class which utilizes the above:

        [Serializable]
        [DataContract]
        public class foo : SharedBase<foo>
        {
            [DataMember]
            public int? ID { get; set; }
            [DataMember]
            public string Name { get; set; }
            [DataMember]
            public string Description { get; set; }
            [DataMember]
            public string Subject { get; set; }
            [DataMember]
            public string Body { get; set; }

            public foo(IDataRecord record)
            {
                Hydrate(record, this);
            }

            public foo() {}
        }

    Explanation:

    - Class foo inherits from SharedBase, specifying itself as the type parameter. (Note: SharedBase is abstract here in the event we want to provide additional methods which could be overridden by the instance class.)

        public class foo : SharedBase<foo>

    - One of the foo class constructors accepts a data record and then calls the Hydrate method on SharedBase, passing in the record and itself.

        public foo(IDataRecord record)
        {
            Hydrate(record, this);
        }

    - The Hydrate method on SharedBase uses reflection on the object passed in to determine its properties. At the same time, it caches these properties to avoid repeated, expensive reflection calls.

        public static void Hydrate(IDataRecord dataRecord, T instance)
        {
            var instanceType = instance.GetType();

            // Caching properties to avoid repeated calls to GetProperties.
            // Noticeable performance gains when processing the same types repeatedly.
            if (cachedProperties == null)
            {
                cachedProperties = instanceType.GetProperties().ToList();
            }
            . . .

    - The Hydrate method iterates each property on the object and determines whether a column with a matching name exists in the data record.

        foreach (var property in cachedProperties)
        {
            if (!dataRecord.ColumnExists(property.Name)) continue;
            var ordinal = dataRecord.GetOrdinal(property.Name);
            . . .

      NOTE: ColumnExists is an extension method I put on IDataRecord, which I'll include at the end of this post.

    - The Hydrate method determines whether the property is nullable and whether the value in the corresponding column of the data record is null.

        var isNullable = property.PropertyType.IsGenericType &&
                         property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
        var isNull = dataRecord.IsDBNull(ordinal);
        var propertyType = property.PropertyType;
        . . .

    - If the Hydrate method determines the property is nullable, it resolves the underlying type and sets propertyType accordingly.

    - The Hydrate method then sets the value of the property based upon propertyType.

    That's it!

    The magic here is in a few places. First, you may have noticed the following:

        public abstract class SharedBase<T> where T : SharedBase<T>

    This says that SharedBase can be created with any type, and that each closed type gets its own copy of the static members within SharedBase. This is important because we are caching the properties per type. If we did not handle things this way, only one type could be cached at a time, or we'd need to create a collection that maps each type to its cached properties, which is not very elegant.

    Second, in the constructor for foo you may have noticed this (literally):

        public foo(IDataRecord record)
        {
            Hydrate(record, this);
        }

    I wanted the code for auto-hydrating to be as simple as possible. At first I wasn't quite sure how I could call Hydrate on SharedBase from within an instance of the class and pass in the instance itself. Fortunately, simply passing in "this" does the trick. I wasn't sure it would work until I tried it out, and fortunately it did.

    So, to actually use this feature with ADO.NET, you'd do something like the following:

        public List<foo> GetFoo(int? fooId)
        {
            List<foo> fooList;
            const string uspName = "usp_GetFoo";

            using (var conn = new SqlConnection(_dbConnection))
            using (var cmd = new SqlCommand(uspName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                // Pass DBNull.Value rather than null when fooId has no value;
                // a null Value would make ADO.NET treat the parameter as missing.
                cmd.Parameters.Add(new SqlParameter("@FooID", SqlDbType.Int)
                                       { Direction = ParameterDirection.Input, Value = (object)fooId ?? DBNull.Value });
                conn.Open();

                using (var dr = cmd.ExecuteReader())
                {
                    fooList = (from row in dr.Cast<DbDataRecord>()
                               select new foo(row)).ToList();
                }
            }
            return fooList;
        }

    Nice! Instead of line after line manually assigning values from the data record to an object, you simply create a new instance and pass in the data record. Note that there are certainly cases where the columns returned from a stored procedure do not match up with the property names. In that scenario you can still use the above method and simply do your manual assignments afterward.
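
    Since the excerpt ends before the promised ColumnExists listing, here is a minimal sketch of such an extension method, assuming a simple case-insensitive scan of the record's field names (the author's actual implementation may differ):

        public static class DataRecordExtensions
        {
            // Hypothetical reconstruction of the ColumnExists helper referenced
            // above. GetOrdinal throws IndexOutOfRangeException for a missing
            // column, so we scan the field names instead of catching exceptions.
            public static bool ColumnExists(this IDataRecord dataRecord, string columnName)
            {
                for (var i = 0; i < dataRecord.FieldCount; i++)
                {
                    if (string.Equals(dataRecord.GetName(i), columnName,
                                      StringComparison.OrdinalIgnoreCase))
                    {
                        return true;
                    }
                }
                return false;
            }
        }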

    Read the article

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The process consists mainly of filesystem tasks and conversions. The database of files is stored in MySQL. The script is simple but causes heavy load on the system; for example, renaming/converting/copying thousands of files in a run. The script does not read the content of files into memory; it just manages the commands for sub-processes. The main weight is on the communication with the filesystem. The script will be used regularly for new files. My concern is about performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose and why? UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get file info, then update the database. As you can see, the process has no room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc. Thus, it could be written in any language; I just wish to reduce resource usage by speeding up the long loop. NOTE: I know that questions comparing languages are not favored, but I really had trouble choosing, because the problems only show up in action.
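
    For scale, a minimal shell sketch of the described pipeline might look like the following (the paths, convert options, and table layout are hypothetical); each iteration is dominated by the convert subprocess and the filesystem calls, so the scripting language's own loop overhead is marginal:

        #!/bin/bash
        # Hypothetical source/destination folders and database schema.
        SRC=/data/incoming
        DST=/data/processed

        for f in "$SRC"/*.jpg; do
            base=$(basename "$f")
            # Convert into the destination folder, then drop the original
            # (a convert-plus-move in one pass); skip files that fail.
            convert "$f" -resize 1024x1024 "$DST/$base" || continue
            size=$(stat -c%s "$DST/$base")
            mysql -u user -p'secret' files_db \
                -e "UPDATE files SET status='done', size=$size WHERE name='$base';"
            rm "$f"
        done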

    Read the article

  • How can I change the color of the pane separator for my Ambiance theme modification?

    - by WarriorIng64
    I am currently messing around with Ambiance, trying to give Nautilus a dark sidebar (because I think it looks much better that way, especially since the current look has the dark-colored breadcrumbs clashing horribly with the light-colored sidebar). I have zero experience with creating GTK+ themes and couldn't find any documentation online, so I just made a copy of the folder for Ambiance under /usr/share/themes, renamed it "Ambiance Dark Sidebar", and started messing with color values. As shown below, I found the value in nautilus.css that needed to be tweaked to create the dark sidebar, but there is still one part that stubbornly stays light gray. This is the pane separator, and I want to change it so it matches the rest better (marked in red). Does anyone know what I need to do to change the color of this part so it matches the rest of the sidebar? I already know from seeing themes like Adwaita Dark that this should be possible, but even after poking around in that theme I didn't find anything that seemed to help. Here are the contents of the files I modified in the theme folder Ambiance Dark Sidebar, stored alongside Ambiance in /usr/share/themes: index.theme, gtk-3.0/apps/nautilus.css
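
    One hedged guess, not a verified fix: GTK 3 defines a "pane-separator" style class (GTK_STYLE_CLASS_PANE_SEPARATOR) that themes can target, so appending a rule along these lines to the modified theme CSS might recolor that separator (the hex value is only an example dark shade):

        /* Hypothetical rule; selector support and the exact shade
           may vary by GTK version. */
        .pane-separator {
            background-color: #3c3a37;
        }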

    Read the article

  • Weird behavior when debugging an ASP.NET Web application: cookie expires (1/1/0001 12:00 AM) by itself on the next breakpoint hit.

    - by evovision
    I'm working on an ajaxified (Telerik AJAX Manager) ASP.NET application using Visual Studio 2010 (running with admin privileges) and IIS 7.5. Basically, everything on the page is inside update panels. As for cookies, I have a custom encrypted "settings" cookie which is added to the Response if it's not there on session start. The application runs smoothly; the problem arose when I started debugging it. Actions: no breakpoints set, F5, the application starts in debug mode and the browser window loads. I log in to the site, click on controls, and all is fine. Next I set *any* breakpoint somewhere in the code, break on it, then let it continue running. But once I break again (immediately after the first break) and check the cookie, it has the expired date 1/1/0001 12:00 AM and no data in its value property. I was storing the current language there, which was used inside the Page's InitializeCulture event, so obviously an exception was being raised. I spent several hours deleting the browser cache, temporary ASP.NET files, etc.; nothing seemed to work. The same application was tested in exactly the same environment on another PC, with no debugging problems there. In the end I found the solution: Visual Studio generates an additional .suo file for every solution, where settings like UI state and breakpoint info are stored. I deleted it, loaded the project again, tried debugging, and everything is OK now.

    Read the article

< Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >