Search Results

Search found 11093 results on 444 pages for 'issues'.


  • The 2010 Life Insurance Conference - Washington, DC

    - by [email protected]
    How ironic to be in Washington, DC on April 15 - TAX DAY! Fortunately, I avoided IRS offices and attended the much more enjoyable 2010 Life Insurance Conference, presented by LIMRA, LOMA, SOA and ACLI. This year's conference offered a variety of tracks focused on the Life Industry including Distribution/Marketing, Administration, Actuarial/Product Development, Regulatory, Reinsurance and Strategic Management. President and CEO of the ACLI, Frank Keating, opened the event by moderating a session titled "Executive Viewpoint on New Opportunities." Guest speakers included Ted Mathas, President and CEO of NY Life, and John Walters, President and CEO of Hartford Life. Both speakers were insightful as they shared the challenges and opportunities each company faces and the key role life insurance companies play in our society and the global economy. There were several key themes that were reiterated in multiple sessions throughout the conference - the economy is on the rebound, optimism is growing, consumer spending is up and an uptick in employment is likely to follow. The threat of a double-dip recession seems to have passed. Good news for our industry, and welcomed by all in attendance. Of special interest to me, given my background, was some research shared by both The Nolan Group and Novarica in separate sessions. Both firms indicate that policy administration upgrades/replacement projects remain a top priority in 2010. Carriers continue to invest in modern technology. Modern ultra-configurable systems enable carriers to switch from a waterfall to an agile project methodology, which often entails a "culture change" within an organization. Other themes heard throughout the two-day event: Virtually all sessions focused on People, Process and Technology! Product innovation, agility and speed to market are as important as ever. Social Networks and Twitter are becoming more popular ways of communicating with both field and dispersed staff. Several sessions focused on the application, new business and underwriting process. Companies continue looking for ways to increase market agility, accelerate speed to market, address cost issues and improve service levels across the process. They recognize the need to make it easier to do business with both producers and consumers. Author and economic futurist Jeff Thredgold presented an entertaining, informative and humorous general session on Wednesday afternoon that focused on the US and global economies, financial markets and the retirement outlook. Thredgold did not disappoint anyone with his message! The Thursday morning general session was keynoted by Therese Vaughan (CEO - NAIC) and Thomas Crawford (President of C2 Group). Both speakers gave a poignant view of the recent financial crisis and discussed "Putting the Pieces Back Together." Therese spoke of the recent financial turmoil and likely regulatory changes to the financial services sector. Tom's topics focused on economic recovery and the political environment in Washington, and how that impacts our industry. Next year's event will be April 11-13, 2011 in Las Vegas. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it’s negatively defined (I happen to belong myself to at least one other such a-community). At its roots, it’s not about proposing one specific new silver bullet to kill an old problem; it’s about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer (although they have been used as one for so long) and to recognize instead that there’s a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few: the file system, Active Directory, XML / JSON documents, the Web, e-mail, logs, Excel files, EXIF blobs in your photos, relational databases and, yes, object databases. It’s just a fact of modern life. Notice by the way that most of the data that you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL. So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that’s what you query instead (see the sketch below). Of course, there’s not much distance to travel from that to realizing that querying is better done when completely separated from storage. So why am I writing about this today? Well, that’s something I’ve been giving lots of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was managing and making sense of the mountain of unstructured data that constituted most of their business. The central entity of that system was the file system because we were dealing with lots of Word documents, PDFs, OCR’d articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I’m so glad we didn’t. Today, I’m working on Orchard (another CMS ;). It’s a pretty young project but already one of the questions we get the most is how to integrate existing data. One of the ideas I’ll be trying hard to sell to the rest of the team in the next few months is to completely split the querying from the storage. Not only does this provide great opportunities for performance optimizations, it gives you homogeneous access to heterogeneous and existing data sources. For free.
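    To make the "index of sorts" idea concrete, here is a minimal C# sketch of my own (the type names are hypothetical; this is not Orchard's actual design): every heterogeneous store implements a tiny source interface, the combined index is rebuilt from all of them, and queries only ever touch the index, never the stores.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical abstraction: every store (file system, e-mail, SQL, ...)
        // exposes its content as lightweight index entries.
        public class IndexEntry
        {
            public string SourceName { get; set; }
            public string Key { get; set; }
            public string Text { get; set; }
        }

        public interface IContentSource
        {
            IEnumerable<IndexEntry> EnumerateEntries();
        }

        // The "index of sorts": built ahead of time from all sources;
        // queries run against the index, not against the stores themselves.
        public class CombinedIndex
        {
            private readonly List<IndexEntry> _entries = new List<IndexEntry>();

            public void Rebuild(IEnumerable<IContentSource> sources)
            {
                _entries.Clear();
                foreach (IContentSource source in sources)
                {
                    _entries.AddRange(source.EnumerateEntries());
                }
            }

            public IEnumerable<IndexEntry> Search(string term)
            {
                return _entries.Where(e =>
                    e.Text != null &&
                    e.Text.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0);
            }
        }

    A file-system source would enumerate documents, an e-mail source would enumerate messages, and so on; swapping the in-memory list for a real index engine changes nothing about the shape of the design.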

    Read the article

  • Changing the default installation path to a newly installed hard disk

    - by mgj
    Hi, I am currently working on a dual-booted PC. I am using Windows XP and Ubuntu 10.04 Lucid Lynx released in April 2010. The partition allocated to Ubuntu is almost exhausted. Current disk usage on the PC wrt the Ubuntu OS looks like this: bodhgaya@pc146724-desktop:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 8.6G 8.0G 113M 99% / none 998M 268K 998M 1% /dev none 1002M 580K 1002M 1% /dev/shm none 1002M 100K 1002M 1% /var/run none 1002M 0 1002M 0% /var/lock none 1002M 0 1002M 0% /lib/init/rw /dev/sda1 25G 16G 9.8G 62% /media/C /dev/sdb1 37G 214M 35G 1% /media/ubuntulinuxstore bodhgaya@pc146724-desktop:~$ cd /tmp I am trying to mount a new 40GB hard disk (/dev/sdb1 - given below) alongside my existing Ubuntu system to overcome the hard disk space related issues. I referred to the following tutorial to mount a new hard disk onto the system: http://www.smorgasbord.net/how-to-in...untu-linux%20/ I was able to successfully mount this hard disk for the Ubuntu OS. I have this new hard disk set up in the /media/ubuntulinuxstore directory. The current partitioning of my system looks like this: bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$ sudo fdisk -l [sudo] password for bodhgaya: Disk /dev/sda: 40.0 GB, 40000000000 bytes 255 heads, 63 sectors/track, 4863 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x446eceb5 Device Boot Start End Blocks Id System /dev/sda1 * 2 3264 26210047+ 7 HPFS/NTFS /dev/sda2 3265 4385 9004432+ 83 Linux /dev/sda3 4386 4863 3839535 82 Linux swap / Solaris Disk /dev/sdb: 40.0 GB, 40000000000 bytes 255 heads, 63 sectors/track, 4863 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfa8afa8a Device Boot Start End Blocks Id System /dev/sdb1 1 4862 39053983+ 7 HPFS/NTFS bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$ Now, I have a concern wrt the "location" where new software will be installed. Generally software is installed via the terminal, and by default a fixed path is used for where the post-installation setup files can be found (I am talking in the context of the drive). This is like the typical case of Windows, where software by default is installed on the C: drive. These days people customize their installations to a drive which they find apt to serve their purpose (generally based on availability of hard disk space). I am trying to figure out how to do the same for Ubuntu. As we all know, most software is installed via commands given from the Terminal. My roadblock is how to redirect the default path where files get installed so that it points to this new hard disk. This, if done, will help me overcome the space constraints I am currently facing wrt the partition on which my Ubuntu was initially installed. It would also save me the time of formatting my system and reinstalling Ubuntu and other software all over again. Please help me with this; your suggestions are much appreciated.

    Read the article

  • What will be important in Training in 2011?

    - by anders.northeved
    Now that we have started a new year, I would like to give you a list of topics I think we will be discussing in training and learning in 2011. Some of the areas we have discussed earlier will still be just as important in 2011: Time-to-knowledge Still one of the most important issues for the training department. Internal content production Related to time-to-knowledge. How do we convert internal knowledge to a format that can be used for teaching others? LMS integration How do we get our existing LMS fully integrated with our other ERP modules like HCM, Order Management, Finance, Payroll, etc. Some areas have been discussed before, but we’ll focus more on these in 2011: Combining internal and external training A majority of training departments use a combination of external and internal training. Having the right mix is vital for the quality and efficiency of most training organizations. Certification More rules and regulations mean that managing all employee certifications is more important than ever. Evolving trends in 2011: Social Learning We have been talking about this for a long time, but 2011 will be the year where we will start using it for real (OK, I also said so last year – but this year I’m right…). Real-life use of SCORM 2004 Again a topic we have talked about for a long time, but we are now actually starting to use it to give learners a better e-learning experience. How do we engage and delight the learner? e-learning makes economic sense, it can be easy to understand, it is convenient – but how do we make it more engaging and delight our learners? How to include more types of training in the LMS One of the main focus areas of 2011 will be how to manage and measure mobile learning, on-the-job training and other forms of training in the LMS. Mobile Learning With the ever-growing use of smart phones, mobile learning will be THE hot topic of 2011 in the training world. New topics we will begin discussing in 2011: What is beyond web 2.0 and social learning? - could it be content verification and personal accreditation? Why gaming will not be the silver bullet for all types of e-learning Many people believe gaming can be used for any kind of training, but the creation is too expensive and time-consuming for most applications. Do you agree with these predictions? What are your own predictions? Let me see your comments! (photo: © Marti, photoxpress.com)

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II, which covers Hash Match operations, after this article. When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse; overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb, resulting in poor performance. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance. To read additional articles I wrote, click here. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts Enough theory, let’s see an example where we initially sort 1 month of data and then use the stored procedure to sort 6 months of data. Let’s create a stored procedure that sorts customers by name within a certain date range. --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1) end go Let’s execute the stored procedure initially with a 1 month date range. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be ok. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covers 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler. To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts. Let’s recompile the stored procedure and then first execute it with a 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with a 1 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB) based on the 6 month data estimation (259200 rows) instead of the 1 month data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which covers 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might also affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details. Let’s recompile the stored procedure and then first execute it with a 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with a 6 month date range.
--Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without spilling over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low. Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range. Let’s recreate the stored procedure with the recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, recompile) end go Let’s execute the stored procedure initially with a 1 month date range and then with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. Let’s recreate the stored procedure with the optimize for hint for a 6 month date range. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30')) end go Let’s execute the stored procedure initially with a 1 month date range and then with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.
Let’s execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance. Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling the sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted, this will still lead to spill. On the other side, there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
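Since the title mentions sp_executesql and prepared statements, here is a minimal C# / ADO.NET sketch of the client-side path that produces the same plan-caching behaviour: a parameterized text command is sent to SQL Server via sp_executesql, so the parameter values of the first execution fix the cached plan and its memory grant, exactly as with the stored procedure above. The connection string and database name are placeholders, and the Customers table is assumed to be the one created by the setup script mentioned in the article.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class PlanCacheDemo
    {
        static void Main()
        {
            // Placeholder connection string - adjust for your environment.
            const string connectionString =
                "Server=.;Database=YourDatabase;Integrated Security=true";

            // Parameterized text commands are sent by ADO.NET via sp_executesql,
            // so SQL Server caches a single plan for this statement.
            const string sql =
                "select c.CustomerName, c.CreationDate from Customers c " +
                "where c.CreationDate between @CreationDateFrom and @CreationDateTo " +
                "order by c.CustomerName";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                // The values of this first execution (a 1 month range) determine
                // the memory grant that later executions of the cached plan reuse.
                command.Parameters.Add("@CreationDateFrom", SqlDbType.DateTime).Value =
                    new DateTime(2001, 1, 1);
                command.Parameters.Add("@CreationDateTo", SqlDbType.DateTime).Value =
                    new DateTime(2001, 1, 31);

                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Consume the sorted rows.
                    }
                }
            }
        }
    }

If the parameter values vary widely, the same trade-off discussed above applies: recompiling costs compilation time, but it avoids reusing a memory grant that no longer suits the data being sorted.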

    Read the article

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and the Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms. The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. The second - JSR 355, Executive Committee Merge - is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November. The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358, A major revision of the Java Community Process, is now in process and can be followed on java.net. The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads! JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012 JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012 JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012 JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) – EG Formation, migrated September 2012 JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012 JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012 JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012 JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012 JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012 JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012 JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012 JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat) – migrated December 2011

    Read the article

  • 10.10 freezing when sftp downloading

    - by aGr
    My ubuntu 10.10 is freezing during downloading from my other computer on sftp. I thought it might be some nautilus issues so I tried it via command line and I got the same thing - after few minutes the whole computer freezes. Mostly the numlock led is blinking (I've heard somewhere that this means a kernel panic), but not in 100% cases. I dunno if that helps but here is a log from /var/log/message in the time that this happened. At least I hope so - it wasn't that big, when it happened before. But this looks quite "errorish", right? (isn't complete - see bottom) Jan 5 17:57:49 tomas-ntb kernel: imklog 4.2.0, log source = /proc/kmsg started. Jan 5 17:57:49 tomas-ntb rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="953" x-info="http://www.rsyslog.com"] (re)start Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's groupid changed to 103 Jan 5 17:57:49 tomas-ntb rsyslogd: rsyslogd's userid changed to 101 Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpuset Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Initializing cgroup subsys cpu Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Linux version 2.6.35-24-generic (buildd@vernadsky) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 (Ubuntu 2.6.35-24.42-generic 2.6.35.8) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-provided physical RAM map: Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009e800 (usable) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 000000000009e800 - 00000000000a0000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dffa8000 (usable) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffa8000 - 00000000dffb0000 (ACPI NVS) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffb0000 - 00000000dffbff00 (ACPI data) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dffbff00 - 00000000dfff0000 (ACPI NVS) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000dfff0000 - 00000000e0000000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 00000000ff700000 - 0000000100000000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000120000000 (usable) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] BIOS-e820: 0000000120000000 - 0000000140000000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Notice: NX (Execute Disable) protection cannot be enabled: non-PAE kernel! Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] DMI 2.5 present. Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] AMI BIOS detected: BIOS may corrupt low RAM, working around it. 
Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] last_pfn = 0xdffa8 max_arch_pfn = 0x100000 Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106 Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] Scanning 0 areas for low memory corruption Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified physical RAM map: Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000010000 - 000000000009e800 (usable) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 000000000009e800 - 00000000000a0000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 00000000000e0000 - 0000000000100000 (reserved) Jan 5 17:57:49 tomas-ntb kernel: [ 0.000000] modified: 0000000000100000 - 00000000dffa8000 (usable) ... to long - for whole and better view click here

    Read the article

  • Reset All Internet Explorer 8 Settings to Fix Stability Problems

    - by Mysticgeek
    If you like to tweak and customize IE with Add-ons and changing settings, sometimes you may have problems with stability. To save time, you can reset all of the IE settings rather than trying to troubleshoot individual areas. Reset IE Settings To reset Internet Explorer Settings, click on Tools then Internet Options. When you reset the settings, you won’t lose personal settings like your homepage, search provider, passwords…etc. The Internet Options screen opens…click on the Advanced tab, then under Reset Internet Explorer settings click on the Reset button. You’ll need to verify that you want to reset all Internet Explorer Settings. If you choose to, you can delete all of your personal settings as well, but it shouldn’t be necessary to fix stability issues. The settings will start to reset, and when it’s finished close out of the message box. For the process to complete you’ll need to restart Internet Explorer. When it restarts you’ll be presented with the Welcome screen where you can go through the setup wizard again. After it’s complete, you should be back in business and can start using IE again. With the new enhancements and features available in Internet Explorer 8, sometimes too much tweaking can cause it to stop working. One area you could start with is troubleshooting IE 8 Add-ons. However, if you don’t want to waste time troubleshooting each potential issue, sometimes it’s just easier to reset things back to how they were originally.

    Read the article

  • SQL SERVER – Public Training and Private Training – Differences and Similarities

    - by pinaldave
    Earlier this year, I was on the road for SQL Server Seminars. I did many SQL Server Performance Trainings and SQL Server Performance Consultations throughout the year, but I feel the most rewarding exercise is always the one where the instructor learns something from the students, too. I was just talking to my wife, Nupur – she manages my logistics and administration related activities – and she pointed out that this year I have done 62% consultations and 38% trainings. I was a bit surprised, as I thought the numbers would be reversed. Every time I review the year, I think of training done at organizations. Well, I cannot argue with reality: I have done more consultations (some would call them projects) than training. I told my wife that I enjoy consultations more than training. She promptly asked me a question which was not directly related but made me think for a long time, and in the end resulted in this blog post. Nupur asked me: what do I enjoy the most, public training or private training? I had a long conversation with her on this subject. I am not going to write a long blog post which can change your life here. This is rather a small post condensing my one hour discussion into 200 words. Public Training is fun because… There are lots of different kinds of attendees There are always vivid questions Lots of questions on questions Less interest in theory and more interest in demos Good opportunity for future business Private Training is fun because… There is a focused interest One question is discussed deeply because of existing company issues More interest in "how it happened" concepts – under the hood operations Good connection with attendees This is also a good opportunity for future business Here I will stop my monologue and I want to open up this question to all of you: Question to Attendees - Which one do you enjoy the most – Public Training or Private Training? Question to Trainers - What do you enjoy the most – Public Training or Private Training? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line of business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards: The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do. The standard isn't always followed. There are inconsistencies with the code formatting everywhere. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification. As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: Should I (and if so, how do I) propose to my project manager and team lead that the project needs serious renovation? I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

    Read the article

  • CodeStock 2012 Review: Michael Eaton( @mjeaton ) - 3 Simple Things for Increased Productivity

    3 Simple Things for Increased Productivity. Speaker: Michael Eaton. Twitter: @mjeaton. Blog: http://mjeaton.net/blog This was the first time I had seen Michael Eaton speak, but I have heard a lot of really good things about his speaking abilities. Needless to say, I was really looking forward to his session. He basically addressed the topic of distractions and how they can decrease or increase your productivity as a developer. He makes the case that in order to become more productive you must block/limit all distractions. For example, he covered his top distractions as a developer. Top Distractions: Social Media (Twitter, Reddit, Facebook), Wiki sites, Phone, Email, Video Games, Coworkers, Friends, Family. Michael stated that he uses various types of music to help him block out these distractions in order for him to get into his coding zone. While he states that music works for him, he also notes that he knows of others that cannot really work with music. I have to say I am in the latter group because I require a quiet environment in order to work. A few session attendees also recommended listening to really loud white noise or music in a language other than your own. This allows for less focus to be placed on the words being sung compared to the rhythmic beats being played. I have to say that I have not tried these suggestions yet but will in the near future. However, distractions can be very beneficial to productivity in that they give your mind a chance to relax and not think about the issues at hand. He spoke highly of taking vacations and setting boundaries at work so that developers prevent the problem of burnout. One way he suggested that developers combat distractions is to use the Pomodoro technique. In his example, he selects one task to do for 20 minutes and he can only do that task during that time. He ignores all other distractions until this task or time limit is complete. After it is completed he allows himself to relax and distract himself for another 5-10 minutes before his next Pomodoro. This allows him to stay completely focused on a task, and when the time is up he can then focus on other things.

    Read the article

  • First steps into css - aligning data insite one DIV [on hold]

    - by Andrew
    I am trying to move away from tables and start doing CSS. Here is the HTML code that I am currently trying to place into a nice-looking container. <div> <div> <h2>ID: 4000 | SSN#: 4545</h2> </div> <div> <img src="./images/tenant/unknown.png"> </div> <div> <h3>Names Used</h3> Will Smith<br> Bill Smmith<br> John Smith<br> Will Smith<br> Bill Smmith<br> John Smith<br> Will Smith<br> Bill Smmith<br> John Smith<br> </div> <div> <h3>Phones Used</h3> 123456789<br> 123456789<br> 123456789<br> 123456789<br> 123456789<br> 123456789<br> 123456789<br> 123456789<br> </div> <div> <h3>Addresses Used</h3> 125 Main Evanston IL 60202<br> 465 Greenwood St. Schaumburg null 60108<br> 125 Main Evanston IL 60202<br> 465 Greenwood St. Schaumburg null 60108<br> 125 Main Evanston IL 60202<br> 465 Greenwood St. Schaumburg null 60108<br> 125 Main Evanston IL 60202<br> 465 Greenwood St. Schaumburg null 60108<br> 125 Main Evanston IL 60202<br> 465 Greenwood St. Schaumburg null 60108<br> </div> </div> I now understand how to create classes and assign them to elements. I have no issues doing colors, but I am very confused by element alignment. Could you suggest a nice way to lay it out with some CSS that I can analyze and use as a starting point for learning CSS?

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.) Overview of Common .NET Memory Problems When coming to land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection then you’ll get that memory back finally, but that could take a very long time depending of the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once such that calling dispose multiple times should not be a problem. 
Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a 'using' statement, with any outer code making use of appropriate try-finally blocks instead). Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues). ANTS Memory Profiler 7.0 When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in, however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a "Getting Started" screen that contains links to "Memory profiling video tutorials", "Strategies for memory profiling", and the "ANTS Memory Profiler forum". I’m normally the kind of person who looks at a screen like that only to find the "Don’t show this again" checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it "ANTS MP" from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the "Take Memory Snapshot" button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
This is amazing in some ways as it shows you how in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure what the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program). The Basic Filters are not the only filters there are. “Filter by Object Type” gives you the ability to filter by: “Objects that are disposable”; “Objects that are/are not disposed”; “Objects that are/are not GC roots” (GC roots are things like static variables); and “Objects that implement _______”. “Objects that implement” is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is “Filter by Reference”, which gives you the option to pare down the list based on whether an object is “Kept in memory exclusively by” a particular item, a class/interface, or a namespace; whether an object is “Referenced by” one or more of those choices; and whether an object is “Never referenced by” one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections still remains in effect unless and until you go back and change it. There’s quite a bit more to ANTS MP – it’s a very full featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio’s built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that’s already running. 
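Since the event-subscriber leak keeps coming up in this review (it was the culprit the overview video tracked down, and it is exactly the kind of path the Instance Retention Graph and the "Kept in memory exclusively by" filter expose), here is a minimal C# sketch of that pattern and its fix. The class names are hypothetical; this is my own illustration, not Red Gate's sample code.

    using System;
    using System.Collections.Generic;

    // A long-lived publisher and a short-lived subscriber: the classic
    // event-handler leak described earlier in this review.
    public class StockTicker
    {
        public event Action<string> PriceChanged;

        public void Publish(string message)
        {
            Action<string> handler = PriceChanged;
            if (handler != null)
            {
                handler(message);
            }
        }
    }

    public class PriceLogger : IDisposable
    {
        private readonly StockTicker _ticker;
        private readonly List<string> _history = new List<string>();

        public PriceLogger(StockTicker ticker)
        {
            _ticker = ticker;
            // Subscribing stores a delegate that references this logger (and its
            // _history list) inside the ticker's event field.
            _ticker.PriceChanged += OnPriceChanged;
        }

        private void OnPriceChanged(string message)
        {
            _history.Add(message);
        }

        public void Dispose()
        {
            // Without this unsubscribe, a "discarded" logger and every string in
            // _history stay reachable from the long-lived ticker; the Instance
            // Retention Graph for one of those strings would lead straight back
            // to the ticker's event delegate.
            _ticker.PriceChanged -= OnPriceChanged;
        }
    }

Wrap a logger in a using block (or otherwise call Dispose when you are done with it) and the chain from the GC root to those strings is broken.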
The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the more time collection takes, so just turning everything on is probably a bad idea. But the ability to selectively add performance counters from the extensive list can be very helpful for your memory profiling, since the additional data might provide clues about a particular problem that has been bothering you.

ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu, it will launch profiling for whichever project you have set as the Startup project.

One quick tip from my experience so far with ANTS MP: if you want to properly understand the memory usage of an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio, then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage, I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data, so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through, and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief.

Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it’s very easy to integrate and make use of, and it is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code” article that explains which DLL you need to add as a reference and which method of which class you should call in order to take an automatic snapshot (including the helpful warning to wrap the call in a try-catch block since, under certain circumstances – such as calling it more than 5 times in 30 seconds – it can raise an exception). The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement.
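As a rough illustration of what using it looks like – with placeholder names, since I am not reproducing the exact class and method from Red Gate’s article here – the integration can be as small as a wrapper like this:

    using System;
    using System.Diagnostics;

    public static class ProfilerSnapshots
    {
        // The real snapshot call lives in the DLL that ships with ANTS MP (see the
        // "Taking snapshots from your code" article for the exact type); wire it in here,
        // e.g. ProfilerSnapshots.TakeSnapshot = () => RedGate.MemoryProfiler.Snapshot.TakeSnapshot();
        // That name is a placeholder from memory, not a verified API reference.
        public static Action TakeSnapshot = () => { };

        public static void TrySnapshot(string label)
        {
            try
            {
                TakeSnapshot();
                Debug.WriteLine("Snapshot taken: " + label);
            }
            catch (Exception ex)
            {
                // ANTS MP can throw here (for example, more than 5 snapshots in 30 seconds,
                // or when the app is not running under the profiler), so the try-catch
                // keeps the host application safe.
                Debug.WriteLine("Snapshot skipped (" + label + "): " + ex.Message);
            }
        }
    }

You would then sprinkle ProfilerSnapshots.TrySnapshot("before load") / TrySnapshot("after load") calls around the section of code you care about and compare the resulting snapshots in the Analysis tab.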
Another thing I think would make it even better would be local copies of the webpages it links to. Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally or to have the option to download an additional “documentation” package that would add local copies.

Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session, forgetting that I had set some filter a while earlier in a different filter section, and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g., the Instance List – before remembering to check all the filters for that one setting that was culling a few things from view. Some sort of indicator icon next to the filter section names that appears when you have at least one filter set in that area would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!”

Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.)

The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release.

Bottom Line

ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis.
More importantly to me, it’s a good feeling to know what is happening memory-wise in my programs and to have the confidence that my code doesn’t have any hidden time bombs in it that will cause it to OOM if I leave it running for longer than I do when I spin it up real quickly for debugging or just to see how a new feature looks and feels. It’s a feeling that I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren’t sure whether it’s right for you, or you think it seems interesting but aren’t really sure it’s worth shelling out the money for, give it a try.

    Read the article

  • Fluid VS Responsive Website Development Questions

    - by Aditya P
    As I understand it, these form the basis for targeting a wide array of devices based on browser size, given that it would be time consuming to generate different layouts targeting different/specific devices and their resolutions. Questions: First, right to the jargon: is there any actual difference between the two, or do they mean the same thing? Is it safe to classify current development as mainly HTML5/CSS3 based? What popular frameworks are available to easily implement this? What testing methods are used in this regard? What are the most common compatibility issues across different browser types? I understand there are methods like this http://css-tricks.com/resolution-specific-stylesheets/ – which of these does that come under? Are there any external browser detection methods, besides the API calls specific to the browser, that are employed in this regard?

    Points of interest [prior research before asking these questions]: Why shouldn't "responsive" web design be a consideration? Responsive Web Design Tips, Best Practices and Dynamic Image Scaling Techniques; a recent list of tutorials: 30 Responsive Web Design and Development Tutorials by Eric Shafer on May 14, 2012.

    Update: I've been reading that the basic point of designing content for different layouts to facilitate responsive web design is to present the most relevant information. Now obviously between the smallest screen width and the largest we are missing out on design elements. I gather from http://flashsolver.com/2012/03/24/5-top-commercial-responsive-web-designs/ that the top-of-the-line design layouts (widths) are: desktop layout (980px); tablet layout (768px); smartphone layout – landscape (480px); smartphone layout – portrait (320px). Also we have a popular responsive website testing site, http://resizemybrowser.com/, which lists different screen resolutions. I've also come across http://stackoverflow.com/questions/10538599/default-web-page-width-1024px-or-980px while trying to find the optimal largest layout size to account for, which seems to suggest that 1366x768 is a popular web resolution. Is it safe to assume that just accounting for proper scaling from a width of 980px up to the maximum size would be sufficient, given we aren't presenting any new information at the larger sizes? Does it make sense to add information (which conflicts with the purpose of responsive web design) to utilize the top size and beyond?

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.
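    The generic WCF-behaviour cache from the session isn't reproduced here, but the core idea it wraps – look in a shared cache first, fall back to the real service or database, and write the result back with a TTL – is easy to sketch. The snippet below is a minimal, hypothetical cache-aside helper using .NET's in-process MemoryCache purely for illustration; an enterprise deployment would put AppFabric, memcached or NCache behind the same shape, and none of the names come from the talk.

        using System;
        using System.Runtime.Caching;

        // Minimal cache-aside helper: not the WCF behaviour from the session,
        // just an illustration of TTL-based service output caching.
        public static class ServiceCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            public static T GetOrAdd<T>(string key, TimeSpan timeToLive, Func<T> load) where T : class
            {
                // Serve from cache if present (may be slightly stale by design).
                var cached = Cache.Get(key) as T;
                if (cached != null)
                    return cached;

                // Miss: call the underlying service/database and cache the result.
                var value = load();
                if (value != null)
                {
                    Cache.Set(key, value, new CacheItemPolicy
                    {
                        // A TTL large enough to cover planned downtime isolates
                        // callers from the underlying system.
                        AbsoluteExpiration = DateTimeOffset.UtcNow.Add(timeToLive)
                    });
                }
                return value;
            }
        }

    Typical usage would be something like ServiceCache.GetOrAdd("customer:42", TimeSpan.FromMinutes(30), () => repository.Load(42)); the discussion points above – staleness, security of cached content, and which provider hosts the data – all sit behind that one call.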

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Avoiding Flicker with JQuery Tabs

    - by Damon
    I am a huge fan of JQuery because it seems like every time I want to do something it has a plugin that already does it.  Adding a tabbed interface to a web page was always quite an annoyance, but JQuery UI offers a pretty decent tabs solution (click here to see it).  If you read through the documentation, you'll find that you can create a tabbed interface by calling the tabs() method on an element containing an unordered list.  The only problem that I've experienced with the method is that on slower machines you can see the unordered list render out in its original state before being updated into the final tabbed interface.  A quick way to fix that issue is to set the CSS display property of the element to none, then call the show() method directly after calling the tabs() method.  This keeps the element completely hidden while JQuery sets up the tabs interface and eliminates the flicker.

        <SCRIPT type="text/javascript">
            $(function()
            {
                $("#tabs").tabs();
                $("#tabs").show();
            });
        </SCRIPT>

        <div id="tabs" style="display:none;">
            <ul>
                <li><a href="#tabs-1">First Tab</a></li>
                <li><a href="#tabs-2">Second Tab</a></li>
                <li><a href="#tabs-3">Third Tab</a></li>
            </ul>
            ...
        </div>

    Read the article

  • Live CD / Live USB much faster than full install

    - by user29347
    I've observed it on both laptops I own! HP Compaq nx6125 with Ubuntu 11.04 x64 – somewhat solved. Lenovo Thinkpad T500 with Ubuntu 11.10 x64 – help needed! I'm still struggling with the Thinkpad to get a performance level similar to that of 10-year-old laptops... All in all, this is a really serious issue with multiple versions of Ubuntu that renders computers with perfectly compatible hardware unusable as far as the out-of-the-box experience is concerned. Troubleshooting the resulting issues seems to be hard even for users with some experience installing graphics drivers.

    EDIT: I can't really post additional details. Two different Ubuntu versions, two laptops, two different sets of graphics drivers (open source vs. ATI proprietary) – all with the same symptoms. Also, I can't stress enough how massive the performance degradation is compared to a healthy system. For that reason I ask for input from people who may know roughly what we are dealing with here. I can post more details if we were to focus on my current Thinkpad T500. In that case, my current system details: Lenovo Thinkpad T500; Ubuntu 11.10 x64; ATI Mobility Radeon HD 3650 (also see the "What I have already tried" section about testing Intel graphics); ATI Catalyst 11.10 drivers; OCZ Agility 3 SSD. But! The same happens with the default driver for the ATI card and with the proprietary driver for the ATI card from Jockey (the Additional Drivers applet).

    What I have already tried: 0. Switching to the Intel integrated card (Intel GMA 4500M HD) with the default driver – same effects, which may indicate the problem is not driver related but something of global influence, e.g. nomodeset or something else I don't even know about. (What you can read above) ATI Catalyst 11.10 and the radeon.modeset=0 boot parameter + disabled Wait for VBlank. Unity 2D. Ubuntu 10.04 LTS tested (ubuntu-10.04.3-desktop-i386.iso): both live USB and installed versions blazing fast! (on the default drivers – without even installing the proprietary fglrx drivers). re2 a) seems to give me the only significant results (still poor) – perfect performance of the Unity elements, but with the same crawling stuttering/lagging when dragging windows around. re2 b) this happens often: http://i17.photobucket.com/albums/b68/Bucic/ubuntuforumsorg/Screenshotat2011-10-28083140.png re2 c) Sometimes I am able to witness normal performance when dragging a window around, but only for a second or two. When I try to shake it longer it starts to lag, and it will keep lagging like that with an increased probability of what you see in the sshot in point re2 b). re2 d) I can't establish the influence of radeon.modeset=0, though. Once it seems to be smooth with it, another time without it. Really can't tell.

    Read the article

  • The winning combination: Oracle VM Server for x86 + Oracle Sun Fire HW

    - by Karim Berrah
    You might be wondering why OVM Server for x86 (OVM/x86 here and below) should be seriously considered as a nice alternative (from a business point of view) to standard hypervisors, if you are virtualizing Oracle software, especially if you are planning to move to Oracle x86 hardware (rackmount or blades). Well, let's see some "not well known" facts that might interest you and help you save more money for your entire company (and not only the virtualization team).

    Fact 1: OVM/x86 is considered a hard partitioning technology (check page 2 of Oracle Server Partitioning Licensing Policies), so if you are buying new servers based on the latest Intel Xeon E7 CPUs (10 cores per socket) and have licensing issues in deploying further Oracle software because you are using a hypervisor not recognized as a hard partitioning technology (like VMware), then you need to check here how to do it with OVM. This might help you continue to deploy your Oracle DB instances on new x86 hardware (12-core, 40-core, 64-core servers) in a reasonable way, without having to pay licences for 12, 40 or 64 CPUs. You might also consider migrating your legacy Oracle DBs to a virtualized environment like OVM/x86 and recover some CPU licences that can be reused somewhere else in production.

    Fact 2: OVM/x86 is free to use, without any extra licence for any specific feature (LiveMigration, High Availability, Embedded Management Console). If you want to use it on non-Oracle hardware, there is a support fee per system and per year that is much below VMware support (Oracle VM Premier Limited Support for systems up to 2 CPUs, and Oracle VM Premier Support for any bigger system, independently of the number of populated sockets).

    Fact 3: support is included with your Oracle x86 hardware support (OPS for Systems), and you can re-install Oracle Linux, Oracle Solaris or Oracle VM Server for x86 on your system without being charged, keeping the same support level.

    Fact 4: it is less expensive to virtualize Oracle Linux or Oracle Solaris on OVM/x86 with Oracle hardware than with any other similar solution with VMware, because all the VMs are then supported and licensed when you buy Oracle hardware with OPS.

    Fact 5: Oracle VM Templates bring you many virtual machines already installed, patched and optimized for various Oracle applications. And to be more specific, those templates are fully supported by Oracle, which is not really true when it comes to other hypervisors. By optimized VM kernel, I mean PV drivers, OVM-ready kernels in the VM, a single source clock for all the VMs, better memory management of the VM ...

    Fact 6: there are no extra costs for a management console. OVM comes with a free OVM Manager package for Linux.

    More info:
    Latest announcement of OVM/x86 update 2.2.2
    A short flash demo of OVM Server for x86
    A short flash demo on OVM Templates and Virtual Assembly Builder
    Oracle Linux Support and Oracle VM Support Global Price List
    ISVs: Benefits for Independent Software Vendors (ISVs) in using OVM/x86
    Consultant Services: Advanced Customer Services for OVM/x86
    Technical Features
    Best practices and Guidelines for OVM with Oracle Blades
    Reduce TCO and get more Value from your x86 Infrastructure

    Read the article

  • Apache2 "pseudo" doc root

    - by Brent
    I have several folders in my /www folder that contain various applications. To keep things organized, I keep them in their own folders – this includes my base application. Examples: phpmyadmin = /www/phpmyadmin; phpvirtualbox = /www/phpvirtualbox; root domain site = /www/Landing. The reason I segregate all of my sites is that I actively develop on some of them (my root site), and when I publish via Visual Studio I choose to delete prior to upload – if I put the landing page in the base folder, that would be devastating for me. My goal is that when I go to www.example.com, I get my page. If I go to www.example.com/phpmyadmin, it does not work because of this in the Apache2 config:

        # Error is the "/"
        <Location "/">
            Allow from all
            Order allow,deny
            MonoSetServerAlias domain
            SetHandler mono
            SetOutputFilter DEFLATE
            SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
        </Location>
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
        </IfModule>

    If I change the Location to, say, "/Other", then the base site is broken and the aliases are restored for the other sites. If it is "/", then the base site works and no aliases work. What could I do to make Apache treat /www/Landing as my webroot, but still go to the alias when I go to an alias?

    Edit: Added the default VirtualHost info.

        DocumentRoot /var/www
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.example.com
            ExpiresActive On
            ExpiresByType image/gif A2592000
            ExpiresByType image/png A2592000
            ExpiresByType image/jpg A2592000
            ExpiresByType image/jpeg A2592000
            ExpiresByType text/css "access plus 1 days"
            MonoServerPath domain "/usr/bin/mod-mono-server4"
            MonoDebug domain true
            MonoSetEnv domain MONO_IOMAP=all
            MonoApplications domain "/:/var/www/Landing"
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule (.*) /Landing/$1 [L]
            # Need to watch what the Location is set to. Can cause issues for alias
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias domain
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Read the article

  • SharePoint, HTTP Modules, and Page Validation

    - by Damon Armstrong
    Sometimes I really believe that SharePoint actively thwarts my attempts to get it to do what I want. First you look at something and say, wow, that should work. Then you realize it doesn’t. Then you have an epiphany and see a workaround. And when you almost have that workaround working… well, then SharePoint says no again. Then it’s off on another whirlwind adventure to find a workaround for the workaround. I had one of those issues today, but I think I finally got past the last roadblock.

    So, I was writing an HTTP module as a workaround for another problem. Everything looked like it was working great because I had been slowly adding code into the HTTP module bit by bit in a prototyping effort. Finally I put the last bit of code in place… and I started to get an error: “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again.” This is not an uncommon error – it normally occurs when you are updating an item on a GET request and you have not marked the web containing the item with AllowUnsafeUpdates. One issue, though, was that I wasn’t updating anything in my code. I was, however, getting an SPWeb object, so I decided to set the AllowUnsafeUpdates property on it to true for good measure. Once that was in place, I ran it again… “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again.” WTF?!?! I really expected that setting the AllowUnsafeUpdates property on the SPWeb would fix the issue, but clearly that was not the case.

    I have had occasion to disassemble some SharePoint code with .NET Reflector in the past, and one of the things SharePoint abuses a bit more than it should is the HttpContext. One way to avoid this abuse is to clear out the HttpContext while your code runs and then set it back once you are done. I tried this next, and everything worked out just like I had expected. So, if you are building an HTTP module for SharePoint and some code that you are running ends up giving you a security validation error, remember to try running that code with AllowUnsafeUpdates turned on and with the HttpContext nulled out (just remember to set it back after your code runs or else you’ll really jack things up).
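    A minimal sketch of that final combination – hypothetical names and structure, not the actual module from the post – might look like this:

        using System;
        using System.Web;
        using Microsoft.SharePoint;

        // Sketch of the workaround described above: run the update with
        // AllowUnsafeUpdates on and with HttpContext.Current temporarily nulled out.
        public static class SafeSharePointUpdate
        {
            public static void Run(SPWeb web, Action update)
            {
                HttpContext savedContext = HttpContext.Current;
                bool savedAllowUnsafeUpdates = web.AllowUnsafeUpdates;
                try
                {
                    HttpContext.Current = null;      // keep SharePoint's page validation out of the way
                    web.AllowUnsafeUpdates = true;   // allow updates outside a POST
                    update();
                }
                finally
                {
                    // Always restore the original state or you'll really jack things up.
                    web.AllowUnsafeUpdates = savedAllowUnsafeUpdates;
                    HttpContext.Current = savedContext;
                }
            }
        }

    From the module you would call it as SafeSharePointUpdate.Run(web, () => listItem.Update()); the try/finally is what keeps a failed update from leaving AllowUnsafeUpdates switched on or the HttpContext missing for the rest of the request.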

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is Java based and a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, the two implementations cannot be matched directly. However, concepts, practices, values and patterns – especially DDD – are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey because, as a .NET developer, I now better understand the positive and negative differences between these two platforms. Even though there are differences, they can be overcome; in many cases it was not so hard to match a Java lib/framework with a .NET one during the implementation. Here is the technology stack:
    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - and in particular its Castle WindsorControllerFactory, in order to enable IoC for controllers
    11. WPF - for the incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud based on the Azure SDK
    These are not the latest technologies, tools and libs at the moment, but if someone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the emulated Azure environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components which depend on each other. Let's be more specific: software component A is firmware running on an embedded device, and component B is its respective driver for a normal PC (a Linux/Windows machine). They communicate with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver – it checks whether the firmware version is compatible with the driver's version – we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: we plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention leads to the following example situation: the current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, which will cause the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say at 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below: Can you suggest a best practice for handling the issues described above? Do you think our "custom" convention is good? What changes would you apply to the described convention? Thanks a lot for your feedback! PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.
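    Not an answer to the best-practice question, but since the compatibility check in the driver is where any convention ends up being encoded, here is a rough C# sketch of one way to express the scheme described above. The class and its members are a hypothetical illustration, not the poster's actual driver code, and the pairing rule is just one reading of the even/odd convention:

        using System;

        // Hypothetical sketch of the proposed convention: major.minor.patch,
        // any API change bumps the minor number, even minor = stable branch,
        // odd minor = unstable/experimental branch.
        public class ComponentVersion
        {
            public int Major { get; private set; }
            public int Minor { get; private set; }
            public int Patch { get; private set; }

            public ComponentVersion(int major, int minor, int patch)
            {
                Major = major;
                Minor = minor;
                Patch = patch;
            }

            public static ComponentVersion Parse(string text)
            {
                string[] parts = text.Split('.');
                return new ComponentVersion(int.Parse(parts[0]), int.Parse(parts[1]), int.Parse(parts[2]));
            }

            public bool IsStable
            {
                get { return Minor % 2 == 0; }
            }

            public bool IsApiCompatibleWith(ComponentVersion other)
            {
                // One reading of the convention: an API change bumps the minor number,
                // and an unstable line 2k+1 is released as the stable even minor 2k
                // (e.g. 1.5.x stabilizes as 1.4.0), so the pair (2k, 2k+1) shares an API.
                // Integer division by two groups exactly those pairs.
                return Major == other.Major && (Minor / 2) == (other.Minor / 2);
            }

            public override string ToString()
            {
                return Major + "." + Minor + "." + Patch;
            }
        }

    With that reading, a 1.5.3 driver would accept 1.4.0 firmware (same API generation) but reject 1.2.1 (older API), which lines up with the example situation above; whether that strictness is the right call is exactly the kind of feedback being asked for.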

    Read the article

  • TFS Rant *WARNING* negative opinions are being expressed.

    - by ryanabr
    It has happened several times now that I end up installing TFS "over the shoulder" of the system admin guy whose job it will be to "own" the server when I am gone. TFS is challenging enough to stand up when doing it myself on a completely open platform, but at these locations, networks are locked down, machines are locked down, and the unexpected always seems to pop up. I personally have the tolerance for these things as a software developer, but as we are installing I have to listen to all of the 'colorful' remarks being made: "why is it like this" or "this is a piece of crap". Generally the issues center around SharePoint integration. TFS on its own is straightforward, but the last flavor in everyone's mouth is the SharePoint piece. As a product I like SharePoint, but installation is a nightmare. In this particular case, we are going to use WSS since the customer would like this separate from their corporate SharePoint 2010 installations; their dev team is really small (one developer) and TFS is being used as a VSS replacement more than a full-blown ALM tool. The server where it is being installed has a Cisco Security Agent on it that seems to block 'suspicious' activity and, as far as I can tell, is preventing WSS from installing properly. The most confounding thing is that we can find no meaningful log entries to help diagnose the issue. It didn't help matters that when we tried to contact Microsoft for support, because we mentioned TFS in the list of things we were trying to install, after waiting 2 hours we got a TFS support person, NOT the SharePoint person we really needed; after another 2 hours, the SharePoint support we did get managed to corrupt the registry sufficiently with his 'tools' that we ended up starting over from scratch the next day anyway, after going home at midnight. My point to this is: the System Administrator who is going to own this now thinks it is a piece of crap because SharePoint wouldn't install properly. Perception is everything. Everyone today is conditioned that software installs and works in a very simple manner. When looking at the different options to install TFS with the different "modes", there is inconsistency in the information being presented, which leads to choices that cause headaches and this bad perception before the product is even installed. I am highlighting this because I love TFS as a product, but I HATE installing it, and I would like it to install as simply and elegantly as the product operates once it is installed.

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Weekly. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well – and that are applicable to any government and country/region – such as:
    · How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sectors collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?
    · What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, touch on the subject only briefly, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."
    That Got Me Thinking
    · How would it impact Oracle's customers? I know they have business continuity plans – is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...
    · How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.
    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article
