Search Results

Search found 6178 results on 248 pages for 'section'.

Page 120/248

  • HUGE EF4 Inheritance Bug

    - by djsolid
    Well, maybe not for everyone, but for me it is definitely really important. That is why I'll get straight to the point. We have the following model (shown as a diagram in the original post), which maps to the following database. We are using EF 4.0 and we want to load all Burgers including BurgerDetails, so we write a query that eager-loads that navigation property (a sketch follows below). But it fails. The error is: "The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[SampleEFDBModel.Food]' but the required type is 'Transient.reference[SampleEFDBModel.Burger]'. Parameter name: arguments[0]" So in the new version of EF there is no way to eager-load data through navigation properties with 1-1 relationships defined in subclasses. Here is the relevant Microsoft Connect issue. It is described through another example, but the result is the same. Please, if you think this is important, vote it up on Microsoft Connect. EF 4.0 has many improvements. I have been using it since v1 in large-scale projects, and this version is faster, produces cleaner SQL, is more reliable, and can be used for complicated business scenarios. That is why I believe this issue should be solved as soon as possible. I understand that release cycles are slow, but I am hoping at least for a hotfix. I have also uploaded the example project so you can test it; download it from here. If anyone has found any workarounds, please post them in the comments section. Thanks!
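    For context, a minimal sketch of the kind of eager-load that fails. The entity names come from the error message; the context name and exact query shape are assumptions, since the original post showed the model and query only as images:

        // Hedged sketch: Burger inherits from Food, and BurgerDetails is a 1-1
        // navigation property declared on the Burger subclass.
        using (var context = new SampleEFDBModelEntities()) // hypothetical context name
        {
            var burgers = context.Foods
                                 .OfType<Burger>()
                                 .Include("BurgerDetails") // EF 4.0 throws the ResultType error here
                                 .ToList();
        }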

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the cool command-line util TfsAdminUtil... alas, it is now gone. Here are the steps to complete. Edit the web.config, usually located (on a default install) at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config (a before/after sketch follows below): <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" /> The next step is to edit previous solutions/projects: 1) Open the solution file, i.e. ProductApp.sln. 2) Edit the SccTeamFoundationServer URL under the Global section, i.e. change it to the new name. If you have the DB server on the same machine, you will need to go in and remove the existing DB user account assigned to the TFS DBs. Remove the old [%machine_name%] value, i.e. the Tuned_Dev_PC_12\Justin user, from the above DBs. Now add the new Justin\Justin user account, associated with the new machine name, to the TFS & Reporting DBs... dbo or the TFSADMIN & TFSEXEC roles will do in this case (or add both). Now either re-apply the user or add the new account (removing the old account, i.e. Tuned_Dev_PC_12\Justin). If DB permissions are set up correctly you will get a screen that looks like this. If it pauses or gets stuck you need to look back at adding the correct DB perms to the JUSTIN\Justin user account. Also, if your project is still complaining about the old TFS name: 1) Team\Connect to New Team Foundation Server 2) Add\Remove TFS 3) Add the new TFS name. Once you have connected to the new TFS server, reload your project from TFS... this removes a lot of the bugs that hang around in the local project\solution. This is similar to a VSS 2005 and older fix. Cheers. (ETA about 60-90 mins, so weigh up the need vs payoff.) Shutdown, restart.
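    For clarity, a hedged before/after sketch of that web.config edit, assuming the machine was renamed from TUNED_DEV_PC_12 to JUSTIN (your server and instance names will differ):

        <!-- before the rename (old machine name is an assumption) -->
        <add key="applicationDatabase"
             value="Data Source=TUNED_DEV_PC_12\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />

        <!-- after the rename -->
        <add key="applicationDatabase"
             value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />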

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid!  This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you’ve done it a few times.  Login to the e-delivery site, and accept the license terms and restrictions.  Then, you’ll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you’ll see a list of applicable products, and you’ll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I’d like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
    Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment. Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in ~/docupresentment/docserv.  Forging onward, the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file, and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following it is <entry name=”StartArguments”>, and as you might imagine these are the arguments passed to the executable.  A few things to point out: The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> setting allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these acts independently from the others, so if one crashes, it does not affect any others.  In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!  
    Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits between checks of these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds and a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment; I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###
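    To pull the settings discussed above into one place, here is a hedged sketch of how the DocumentServer section of docserv.xml might look. The StartCommand value, the purge interval, and the exact entry syntax are assumptions for illustration; the other values are the defaults mentioned in the post:

        <section name="DocumentServer">
          <entry name="StartCommand">java</entry> <!-- assumption: your executable name may differ -->
          <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
          <entry name="Instances">2</entry> <!-- default; tune per CPU with load testing -->
          <entry name="FileWatchTimeMillis">12000</entry> <!-- default 12 seconds; 0 disables the file watch -->
          <entry name="FilePurgeTimeSeconds">60</entry> <!-- assumption: set per your cleanup requirements -->
          <entry name="FilePurgeList">filecache.properties</entry> <!-- root name of the per-instance file cache -->
        </section>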

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use the standalone server, or BIP with OBIEE using OC4J as the web server: have you ever taken a look-see at the console window (DOS/xterm) that you use to start it? Ever turned on debugging, seen masses of info flow by that window, and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command-line options when starting OC4J to direct the stdout and stderr output directly to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just plugged in the following under the start section. I modified the line: set CMDARGS=-config "%SERVER_XML%" -userThreads to set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads Bounced the server and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check out the user docs to find out how to get the log files to write. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • Regular Expressions Quick Reference

    - by Jan Goyvaerts
    The Regular-Expressions.info website has a new quick reference to regular expressions that lists all of the regex syntax in one single table along with a link to the tutorial section that explains the syntax. The quick reference is ordered by syntax whereas the full reference tables are ordered by feature. There are multiple entries for some of the syntax as different regex flavors may use the same syntax for different features. Use the quick reference if you’ve seen some syntax in somebody else’s regex and you have no idea what feature that syntax is for. Use the full reference tables if you already know the feature you want but forgot which syntax to use. Of course, an even quicker reference is to paste your regex into RegexBuddy, select the application you’re working with, and click on the part of the regex you don’t understand. RegexBuddy then selects the corresponding node in its regex tree which summarizes exactly what the syntax you clicked on does in your regex. If you need more information, press F1 or click the Explain Token button to open the relevant page in the regex tutorial in RegexBuddy’s help file.

    Read the article

  • Use Microsoft PowerPivot to Access Salesforce.com Through the OData Connector

    - by dataintegration
    This article will explain how to connect to any of our OData Connectors with Microsoft Excel's PowerPivot business intelligence tool. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors. Step 1: Download and install both the Salesforce Connector from RSSBus and PowerPivot for Excel from Microsoft. Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector. Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open Excel and select the PowerPivot tab at the top of the window. Step 4: Here you will click on the button labeled PowerPivot Window at the top left. Step 5: A new pop up will appear. Now select the option "From Data Feeds". Step 6: In the resulting Table Import Wizard you will enter the OData URL of the Salesforce Connector. You can find this by clicking on the Settings tab of the Salesforce Connector. It will look something like this: http://localhost:8181/sfconnector/data/conn/odata.rsc. You will also need to add authentication options in this step. To do this, click on the Advanced button and scroll down to the Security section of the resulting pop up window. Change the Integrated Security option to "Basic". You will also need to enter the User ID and Password of the user who has access to the Salesforce Connector. Step 7: When the connection to the Salesforce Connector is successful, click the Next button at the bottom of the window. Step 8: A table listing of the available tables will appear in the next window of the wizard. Here you will select which tables you want to import and click Finish. Step 9: If the import was successful, click Close and you are done! Your data is now in PowerPivot.

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already-converted transactional data would need to be cleared down) and the Production instance being migrated into is actually Production. (We have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment.) In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about financial transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need, and do not do a complete rerun for your subsequent conversion.

    Read the article

  • ASP.NET4.0-Compatibility Settings for rendering controls

    - by Jalpesh P. Vadgama
    With ASP.NET 4.0 Microsoft has taken a great step forward in rendering controls: the markup is now much cleaner. There are lots of enhancements for rendering HTML controls in ASP.NET 4.0, and controls like Menu and ListView now render cleaner HTML. But recently I faced a strange problem in rendering controls. I had my site in ASP.NET 3.5 and I wanted to convert it to ASP.NET 4.0. I had applied my styles as per the 3.5 rendering, and some of the items are obsolete in ASP.NET 4.0. Modifying the style sheet would have been a tedious job; here the ASP.NET 4.0 compatibility setting comes in to help. The ASP.NET 4.0 compatibility setting provides full backward compatibility in terms of rendering controls. You can assign it in your web.config like the following: <system.web> <pages controlRenderingCompatibilityVersion="3.5|4.0"/> </system.web> Here the value of controlRenderingCompatibilityVersion is a string which indicates how controls should render in the browser: if you provide 4.0 then it will render controls with cleaner HTML, while if you want to go with the old legacy rendering like 3.5 then you can put 3.5 and it will render the same way as ASP.NET 3.5 does. Hope this helps you! Technorati Tags: ASP.NET 4.0,controlRenderingCompatibility

    Read the article

  • GoodFil.ms Suggests New Movies Based on Friends’ Picks

    - by Jason Fitzpatrick
    Goodfil.ms is a movie suggestion engine that doesn’t suggest movies based on what the critics say or how many anonymous internet points a movie has received, but instead takes into account your personal tastes and the tastes of your friends. From the Goodfil.ms FAQ: Films are social. The best way to find movies is through the people you know. We’ve designed Goodfilms from the ground up to show you what your existing friends are watching and rating, and to focus on showing you what the people around you think about films instead of a random grab bag of “internet voters” or highly specialised critics. Their FAQ file is filled with links to detailed posts about the specifics of the process, so if you’re the curious type we strongly suggest checking it out. In addition to the social-ranking side of Goodfil.ms there’s an excellent “Recent Releases” section for major streaming services like iTunes, Netflix, and Amazon Prime–even if you don’t sign up for the social side of the site you can still keep an eye on the best new releases across the board.

    Read the article

  • Questions, Knowledge Checks and Assessments

    - by ted.henson
    Questions should be used to reinforce concepts throughout the title. You have the option to include questions in the course, in assessments, in Knowledge Checks, or in any combination. Questions are required for creating Knowledge Checks and assessments. It is important to remember that questions that are not in assessments are not tracked. Be sure to structure your outline so that questions are added to the appropriate assignable unit. I usually recommend that questions appear directly below their related section. This serves two purposes. First, it helps ensure that the related content and question stay relative to one another. Secondly, it ensures that when the "link to subject" option is used it will relate back to the relevant content. Knowledge Checks are created using the questions that have been added to the related assignable unit. Use Knowledge Checks to give users an additional opportunity to review what they have learned. A Knowledge Check allows users to check their own knowledge without being tracked or scored. Many users like having this self-check option, especially if they know they are going to be tested later. Each assignable unit can have its own Knowledge Check. Assessments provide a way to measure knowledge or understanding of the course material. The results of each assessment are scored and tracked. Assessments are created using the questions that have been added to the relevant assignable unit(s). Each assignable unit, including the Title AU, can have multiple assessments. Consider how your knowledge paths will be structured when planning your assessments. For instance, you can create a multiple-activity knowledge path with multiple assessments from the same title or assignable unit. Also remember that in Manager an assessment can be either a pre- or post-assessment. Pre-assessments allow the student to discover what is already known about a specific topic or subject, which is important if the personal course feature is being used. Post-assessments allow you to test the student's knowledge or understanding after completing the material.

    Read the article

  • How do i make an AJAX block crawlable?

    - by Vikas Gulati
    I have a block with a few tabs. When the user clicks a tab, the content of that block gets loaded. Now I would like to make it crawlable by the search engines, and at the same time I want to maintain a good user experience. I figured out a couple of alternatives, but each one has its own shortcomings. The approaches I could come up with: Use hashbangs and the associated AJAX-crawling scheme. But hashbangs are not good and a thing of the past now. Secondly, it would make my content crawlable only by Googlebot, as Yahoo and Bing don't support this. Use a GET-parameterized fallback in case JavaScript doesn't work (a sketch follows below). This would work for all bots and would also be nice as it would work without JavaScript. But then this would create duplicates of my page, as this block is only a very small section of my page and I have around 5-6 tabs. So that means many duplicates! Doing this without AJAX is not an option, as it would only increase the page load time: all these blocks have heavy media content in them!
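    To illustrate the second option, here is a hedged sketch of a GET-parameterized fallback (tab names and the loader function are hypothetical): each tab is a real link a crawler can follow, and JavaScript upgrades clicks to AJAX loads.

        <ul class="tabs">
          <li><a href="?tab=overview">Overview</a></li>
          <li><a href="?tab=reviews">Reviews</a></li>
          <li><a href="?tab=media">Media</a></li>
        </ul>
        <script>
          // Progressive enhancement: intercept tab clicks so JavaScript users
          // load content via AJAX instead of the full-page ?tab= request.
          var links = document.querySelectorAll('.tabs a');
          for (var i = 0; i < links.length; i++) {
            links[i].addEventListener('click', function (e) {
              e.preventDefault();
              loadTabViaAjax(this.getAttribute('href')); // hypothetical AJAX loader
            });
          }
        </script>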

    Read the article

  • dpkg: error: parsing file '/var/lib/dpkg/status' near line 6449

    - by mpole
    Good evening. This problem occurred when trying to update using the update manager. A problem occurred, so I cleared all the cache and tried to update once again, this time in a terminal, and it spat out: dpkg: error: parsing file '/var/lib/dpkg/status' near line 6449 missing package name After opening up 'status' with gedit and going to line 6449, I found that nothing was on that line, but the following was before and after it: This package contains the Mono System.Configuration library for CLI 4.0. Original-Maintainer: Debian Mono Group <[email protected]> Homepage: http://www.mono-project.com/ <<<<---- LINE 6449 Package: bzip2 Status: install ok installed Priority: optional Section: utils Installed-Size: 160 Maintainer: Ubuntu Developers <[email protected]> The "<<<<---- LINE 6449" marker is obviously not in the file itself, but I can't see what's wrong here. Anybody have an idea? Thanks! Edit: I have tried running: sudo apt-get install --fix-missing sudo dpkg --clear-avail But no good...

    Read the article

  • Performance Optimization &ndash; It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    Time Measurement Method: Potential Pitfalls
    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C- and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

    Reporting Method: Potential Pitfalls
    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with Debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure the impulse and location of a particle in a quantum system at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is main memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, and that will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:
    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if this method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. DataContractSerializer was used for serialization, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            // Reset the shared stream before each serialization run.
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807    1,000,000
        DataContract         4,402    1,000,000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We have measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1.

        Serializer      Time in ms    N objects
        protobuf-net            85    1
        DataContract            24    1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was? 
    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When you encounter, after much analysis, something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
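    As an aside, the "decorate some methods with tracing" approach mentioned above can be as simple as a disposable Stopwatch wrapper. A minimal sketch (this is not the author's actual tracing library, just an illustration):

        using System;
        using System.Diagnostics;

        // Wrap the code you care about in a using block and the elapsed
        // time is written to the trace listeners on Dispose.
        sealed class TraceTimer : IDisposable
        {
            readonly string _name;
            readonly Stopwatch _watch = Stopwatch.StartNew();

            public TraceTimer(string name) { _name = name; }

            public void Dispose()
            {
                _watch.Stop();
                Trace.WriteLine(String.Format("{0} took {1:N0}ms", _name, _watch.Elapsed.TotalMilliseconds));
            }
        }

        // Usage:
        // using (new TraceTimer("Serialize 1M objects"))
        // {
        //     /* code under test */
        // }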

    Read the article

  • Desktop Fun: Beaches Theme Wallpapers

    - by Asian Angel
    Is your vacation still a few weeks or months away? Make the waiting a little easier by adding some of that vacation scenery to your desktop with our Beaches Theme Wallpapers collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • Couldn't find package - But package is listed in the Packages file

    - by Chris
    (Quoted items are redacted elements.) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract the package, make a few modifications (just the control files, to fit with company policy; sometimes file install locations too, though not in this case) and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package. However, with a recent one I am attempting to convert, some libraries and stuff aren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly); the original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (as the original package is not very good and will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following: dpkg-deb -R "directory" Modify DEBIAN/control dpkg-deb -b "directory" "package name I want" I did this and put it in our repository. The package shows up in the "Packages" file on the repository, and running apt-get update on the client side shows the package in: /var/lib/apt/lists/"server"_"location"_Packages However, when I do an apt-get install on the package name (as listed in the Packages file; I did a copy-paste) it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (name redacted): Package: "package name" Priority: extra Section: unknown Maintainer: "maintainer" Architecture: any Version: 1.0-lucid5 Depends: libc Filename: "directory"/"package_filename" Size: 2206292 MD5sum: "md5sum" SHA1: "sha key" SHA256: "sha256 key" Description: "description" I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or perhaps offer some help on getting apt-get to properly see the package? Or perhaps an alternative; I am not even sure if this is a valid way to repackage something. Thanks.

    Read the article

  • Getting the alternative to the 200-Line Linux Kernel patch to work

    - by Gödel
    Apparently, there is a comparable alternative to the 200-line kernel patch that involves no kernel upgrade. It is presented here and discussed here. However, I am not sure if webupd8's solution (under the section "Use it in Ubuntu") actually works on Ubuntu or not. In particular, one commenter on ./ says he's getting an error message. Could anyone post the "correct" method that actually works? Suggested solution: Based on the comments I've read so far, the following seems to work. (1) In /etc/rc.local, add the following lines above exit 0: mkdir -p /dev/cgroup/cpu mount -t cgroup cgroup /dev/cgroup/cpu -o cpu mkdir -m 0777 /dev/cgroup/cpu/user echo "/usr/local/sbin/cgroup_clean" > /dev/cgroup/cpu/release_agent (2) Create a file named /usr/local/sbin/cgroup_clean with the following content: #!/bin/sh rmdir /dev/cgroup/cpu/$1 (3) In your ~/.bashrc, add: if [ "$PS1" ] ; then mkdir -m 0700 /dev/cgroup/cpu/user/$$ echo $$ > /dev/cgroup/cpu/user/$$/tasks echo "1" > /dev/cgroup/cpu/user/$$/notify_on_release fi (4) To make sure the execution bit is on, execute: sudo chmod +x /usr/local/sbin/cgroup_clean /etc/rc.local (5) Reboot.

    Read the article

  • Silverlight Cream for June 17, 2010 -- #885

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Antoni Dol, Jeff Prosise, David Anson, and John Papa. Shoutouts: Rob Davis has a World Cup Football Stadium tour in Silverlight, Azure, and Bing Maps up: The World Cup Map... cruise around this... tons of features. The Silverlight Team Blog reports that NBC Sports is streaming the US Open in Silverlight. Adam Kinney announced that Expression Studio 4 Launch keynote videos are available. From SilverlightCream.com: Data Driven Applications with MVVM Part III: Validation, Bringing the UI Closer Zoltan Arvai's 3rd (and final) part of the Data-Driven MVVM apps series is up at SilverlightShow. In this final section he focuses on validation and discusses closer integration with the view. Focus on FocusVisualElement in Silverlight buttons Antoni Dol has a cool post up about the FocusVisualElement, and uses a button to demonstrate how it can be used. Dynamic XAP Discovery with Silverlight MEF Jeff Prosise is discussing Silverlight and MEF ... but better than the normal loading of XAP files ... he's doing dynamic discovery of XAP files ... and makes it look easy! Updated analysis of two ways to create a full-size Popup in Silverlight David Anson revisits a prior post with an eye toward Silverlight 4. The feature he's discussing is that you can now hook the Resized event without having browser zoom disabled... and he demonstrates its use in the code from the old post. Silverlight as a Transmedia Platform (Silverlight TV #33) Jesse Liberty joins John Papa this week for Silverlight TV #33, discussing transmedia and Silverlight as a transmedia platform. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How to style this form using CSS ? [closed]

    - by Rafael
    Hi all, I'm a beginner at CSS trying to follow a NETTUTS tutorial, but there's a portion of the webpage that I don't know exactly how to style in CSS to make it look right... I just can't get these input text boxes, the textarea and the button to be aligned like that, and to be honest the tutor isn't doing a great job of clearing things up. Using relative and absolute positioning, and setting top and right spacing, is kinda not a good idea, I think... I'm trying to align them using the Flexbox feature but don't know why those elements are not moving at all... Here's my HTML & CSS3 code (for Chrome): <section id="getAfreeQuote"> <h2>GET A FREE QUOTE</h2> <form method="post" action="#"> <input type="text" name="yourName" placeholder="YOUR NAME"/> <input type="email" name="yourEmail" placeholder="YOUR EMAIL"/> <textarea name="projectDetails" placeholder="YOUR PROJECT DETAILS."></textarea> <input type="text" name="timeScale" placeholder="YOUR TIMESCALE"/> <button>Submit</button> </form> #getAfreeQuote form { display:-webkit-box; -webkit-box-orient:vertical; height:500px; } #getAfreeQuote input[name="yourName"]{ -webkit-box-ordinal-group:1; } #getAfreeQuote input[name="yourEmail"]{ -webkit-box-ordinal-group:1; } #getAfreeQuote textarea{ -webkit-box-ordinal-group:2; } #getAfreeQuote input[name="timeScale"]{ -webkit-box-ordinal-group:3; } #getAfreeQuote button { -webkit-box-ordinal-group:4; } ...and the result (screenshot not included).

    Read the article

  • New Fusion CRM Webinars for Partners dates and subjects announced

    - by Richard Lefebvre
    New Fusion CRM weekly webinar dates and subjects have been announced! Visit our microsite to find out about the sessions to come and mark them in your agenda. The next session will take place Monday, April the 2nd at 3pm GMT / 4pm CET and will address Fusion CRM Sales Planning. In order to check the complete agenda and see login details, please visit our dedicated microsite. How to join the dedicated microsite: Click on http://isdportal.oracle.com/isd_html/sf.htm Enter your email address in the corresponding field Enter fusion_crm in the “Access URL/Page Token” field Agenda: The list of sessions is published and will be regularly updated in the microsite. Duration: Each session lasts up to 60 minutes Webex: The respective webinar link and session ID are published in the microsite Audio: The audio call details (telephone numbers by country, call number and password) are indicated in the microsite Slides: For your convenience, a PDF copy of each presentation will be stored in the microsite’s document section. We hope that this series of webcasts will be instrumental to your Fusion CRM business success! For further information please contact me at [email protected]

    Read the article

  • Name disappeared from main menu bar

    - by Anthony Burman
    I have Ubuntu 10.4. I installed an Nvidia Gigabyte GeForce 210 graphics card because the Intel graphics card is a disaster. I used a terminal, and basic, fiddly adjustments were successfully made to get the window to fit the screen. The new card is a roaring success. Nothing freezes and visuals can be set to Extra. But, from that point on, my main menu bar misbehaved and icons kept disappearing. Logging off and on usually helped. R-E-I-S-U-B was needed when the actual logoff icon disappeared. My full name, Anthony Burman, appeared in the main menu bar, alongside Wanda the Fish, the Oracle... My name disappeared and I cannot get it back. It can't be found anywhere in the 'add to panel' section. Indicator applet session also can't bring it back. How do I re-insert my name on the toolbar? Thanks, Ant.

    Read the article

  • SQL SERVER – DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    - by Pinal Dave
    This is the third part of the series on Incremental Statistics. Here is the index of the complete series: What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 In the earlier two parts we saw what incremental statistics are, along with a simple example. In this blog post we will be discussing the DMV query which lists all the statistics that are enabled for incremental updates. SELECT  OBJECT_NAME(sys.stats.OBJECT_ID) AS TableName, sys.columns.name AS ColumnName, sys.stats.name AS StatisticsName FROM   sys.stats INNER JOIN sys.stats_columns ON sys.stats.OBJECT_ID = sys.stats_columns.OBJECT_ID AND sys.stats.stats_id = sys.stats_columns.stats_id INNER JOIN sys.columns ON sys.stats.OBJECT_ID = sys.columns.OBJECT_ID AND sys.stats_columns.column_id = sys.columns.column_id WHERE   sys.stats.is_incremental = 1 If you run the above script against the example displayed in part 1 and part 2, you will get a resultset like the following. When you execute the above script, it will list all the statistics in your database which are enabled for incremental update. The script is very simple and effective. If you have a further improved script, I request you to post it in the comment section and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be: Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up; some of our apps are 500MB or more.) Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team. When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight. I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions: I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"? How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project… Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
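    On the first question, a common pattern is indeed one Jenkins job per build flavor, where the only real difference is the shell build step. A hedged sketch (the scheme, identity and profile names are placeholders, and your Xcode tooling may differ):

        # "App X (nightly)" build step: sign with the internal profile.
        xcodebuild -scheme "AppX" -configuration Release \
            CODE_SIGN_IDENTITY="iPhone Distribution: Our Agency (Internal)" \
            PROVISIONING_PROFILE="INTERNAL-PROFILE-UUID" \
            clean build

        # "App X (client)" would be the same step with the client's identity
        # and profile, plus a manually supplied version number passed in as a
        # parameterized-build variable.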

    Read the article

  • Design and Print Your Own Christmas Cards in MS Word, Part 1

    - by Eric Z Goodnight
    Looking for a little DIY fun this holiday season? Open up the familiar tool MS Word and create simple, beautiful Christmas and holiday cards, and impress your family with your crafting skills. This is the first part of a two-part article. In this first section, we’ll tackle design in MS Word. In our second, we’ll cover supplies and proper printing methods to get a great look out of your dusty old inkjet.

    Read the article

  • NRF Big Show 2011 -- Part 3

    - by David Dorf
    I'm back from the NRF show, having been one of the lucky people whose flight was not canceled. The show was very crowded, with a reported 20% increase in attendance, and everyone seemed in high spirits. After two years of sluggish retail sales, things are really picking up and it was reflected in everyone's mood. The pop-up Disney Store in the Oracle booth was great and attracted lots of interest in their mobile POS. I know many attendees visited the Disney Store in Times Square to see the entire operation. It's an impressive two-story store that keeps kids engaged. The POS demonstration station, where most of our innovations were demoed, was always crowded. Unfortunately most of the demos used WiFi, and the signals from other booths prevented anything from working reliably. Nevertheless, the demo team did an excellent job walking people through the scenarios and explaining how shopping is being impacted by mobile, analytics, and RFID. Big Show Links Disney uncovers its store magic Top 10 Things You Missed at the NRF Big Show 2011 Oracle Retail Stores Innovation Station at NRF Big Show 2011 (video) The buzz of the show was again around mobile solutions. Several companies are creating mobile POS using the iPod Touch, including integrations to Oracle POS for the following retailers: Disney Stores with InfoGain Victoria's Secret with InfoGain Urban Outfitters with Starmount The Gap with Global Bay Keeping with the mobile theme, the NRF released a revised version of their Mobile Blueprint at the show. It will be posted to the NRF site very soon. The alternate payments section had a major rewrite that provides a great overview of proximity and remote payment technologies. NRF Mobile Blueprint Links New mobile blueprint provides fresh insights NRF Mobile Blueprint 2011 (slides) I hope to do some posts on some of the interesting companies I spoke with in the coming weeks.

    Read the article

  • SQL SERVER – DATEDIFF – Accuracy of Various Dateparts

    - by pinaldave
    I recently received the following question through email, and I found it very interesting, so I want to share it with you. “Hi Pinal, In the SQL statement below the time difference between the two given dates is 3 sec, but when checked in terms of Min it says 1 Min (whereas the actual min is 0.05 Min) SELECT DATEDIFF(MI,'2011-10-14 02:18:58' , '2011-10-14 02:19:01') AS MIN_DIFF Is this a BUG in SQL Server?” The answer is NO. It is not a bug; it is a feature that works like that. Let us understand that in a bit more detail. When you instruct SQL Server to find the time difference in minutes, it just looks at the minute section only and completely ignores hour, second, millisecond, etc. So in terms of difference in minutes, it is indeed 1. The following will also clarify how DATEDIFF works: SELECT DATEDIFF(YEAR,'2011-12-31 23:59:59' , '2012-01-01 00:00:00') AS YEAR_DIFF The difference between the above dates is just 1 second, but in terms of year difference it shows 1. If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes. SELECT DATEDIFF(second,'2011-10-14 02:18:58' , '2011-10-14 02:19:01')/60.0 AS MIN_DIFF Even though the concept is very simple, it is always a good idea to refresh it. Please share your related experience with me through your comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
