Search Results

Search found 98173 results on 3927 pages for 'maintaining old code'.


  • Google Analytics - New & Old Domain Filtering

    - by DotNetStudent
    I have two domains associated with the same hosting account. Domain A was the one I purchased when I first set up my website, but I didn't like it all that much and it was ranking poorly, so I bought domain B and 301-redirected links from domain A to domain B. Now I would like to know if there is any way I can filter page views in Google Analytics based on the domain used to access the website, so that I can tell whether it is worth keeping the old domain associated with the account. Is there any easy way to do this?

    Read the article

  • Changing hosts but keep old emails [closed]

    - by LDaniel
    Possible Duplicate: Change host / keep emails

    OK, here's the situation: I'm trying to transfer my domain and email address to a new hosting service, and I would like to start using Google's domain apps for my email, etc. My email address currently lives on a WebMail-type platform, and when I move my domain and start using Google domain apps I would also like to keep the old emails and have them imported to the email address on Google. Note: the email address will be the same on both hosts, for example [email protected], so it's keeping the same address between two different systems. I've been Googling how to do this for a while, and none of the migrate options that come up appear in the Google inbox settings or the domain apps config settings. Any help is greatly appreciated.

    Read the article

  • Should I read Kochan's "Programming in C" (10 years old) [on hold]

    - by notypist
    I plan on learning Obj-C, so I decided to do it by the book and learn C first. The problem: S.G. Kochan's highly acclaimed book "Programming in C" (3rd edition) is almost ten years old now [1]. I really want to get started, and everywhere I looked this seemed like the book for beginners, with zero knowledge of programming assumed [2]. So, should I buy it or not?

    [1]: A 4th edition is available to preorder, but it's only due in March 2014!
    [2]: I know some HTML, CSS, and a tiny bit of PHP.

    Read the article

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order: properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience, actions that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.) Code quality does not seem popular. Some examples from my experience:

    Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all the companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could have done so. My conclusion is that developers do not want to write tests.

    Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.

    Conference speakers and magazines talk a lot about EJB3.1, OSGi, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes that help maintain higher quality, or how some nasty beast of legacy code was brought under test. (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing and CI and such have a higher value there.)

    So why is code quality so unpopular/considered boring?

    EDIT: Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

    Part 1 – Table per Hierarchy (TPH)
    Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete Type is arguably the simplest approach of the three, yet using TPC with EF is one of those concepts that has not been covered very well so far, and some resources have even discouraged it. The reason is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered a fun practice. Well, no more. You'll see that with Code First, creating a TPC mapping is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to lack of designer support as you probably would in the other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, map to columns of this table. (The original post illustrates this with a figure showing separate BankAccounts and CreditCards tables, each carrying all of the inherited BillingDetail columns.) The SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
    Here is the complete object model, as well as the fluent API, to create a TPC mapping:

    public abstract class BillingDetail
    {
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }

    public class BankAccount : BillingDetail
    {
        public string BankName { get; set; }
        public string Swift { get; set; }
    }

    public class CreditCard : BillingDetail
    {
        public int CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }

    public class InheritanceMappingContext : DbContext
    {
        public DbSet<BillingDetail> BillingDetails { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BankAccount>().Map(m =>
            {
                m.MapInheritedProperties();
                m.ToTable("BankAccounts");
            });
            modelBuilder.Entity<CreditCard>().Map(m =>
            {
                m.MapInheritedProperties();
                m.ToTable("CreditCards");
            });
        }
    }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

    namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
    {
        public class EntityMappingConfiguration<TEntityType> where TEntityType : class
        {
            public ValueConditionConfiguration Requires(string discriminator);
            public void ToTable(string tableName);
            public void MapInheritedProperties();
        }
    }

    As you have seen so far, we used its Requires method to customize TPH, we used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties method along with ToTable to create our TPC mapping.

    TPC Configuration Is Not Done Yet!

    We are not quite done with our TPC configuration; there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

    using (var context = new InheritanceMappingContext())
    {
        BankAccount bankAccount = new BankAccount();
        CreditCard creditCard = new CreditCard() { CardType = 1 };

        context.BillingDetails.Add(bankAccount);
        context.BillingDetails.Add(creditCard);
        context.SaveChanges();
    }

    Running this code throws an InvalidOperationException with this message:

    The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    The reason we get this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database. That's where the problem occurs: the database has assigned both entities the same primary key value (BillingDetailId = 1 for both), and the ObjectStateManager cannot track two objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, SQL Server's int identity columns don't work very well with TPC, since there will be duplicate entity keys when inserting into the subclass tables, all of which have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) is needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, would solve the problem, but yet another solution is to switch off identity on the primary key property altogether. As a result, we take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution, since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

    public abstract class BillingDetail
    {
        [DatabaseGenerated(DatabaseGenerationOption.None)]
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }

    As always, we can achieve the same result with the fluent API, if you prefer that:

    modelBuilder.Entity<BillingDetail>()
                .Property(p => p.BillingDetailId)
                .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working with the Object Model

    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, we now need to take care of providing unique keys when creating new objects:

    using (var context = new InheritanceMappingContext())
    {
        BankAccount bankAccount = new BankAccount()
        {
            BillingDetailId = 1
        };
        CreditCard creditCard = new CreditCard()
        {
            BillingDetailId = 2,
            CardType = 1
        };

        context.BillingDetails.Add(bankAccount);
        context.BillingDetails.Add(creditCard);
        context.SaveChanges();
    }

    Polymorphic Associations with TPC Are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well.
    After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced earlier, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column that referred to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC Is Complex

    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails:

    var query = from b in context.BillingDetails
                select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements at the beginning of the generated query merely ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries Are Union Based

    The generated query retrieves all instances of BillingDetail from all concrete class tables using a FROM-clause subquery. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class for a given row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will actually perform well, since we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL generated by TPT, where a join is required between the base and subclass tables. A sketch of what such a union-based query looks like follows.
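    The original post shows EF's actual generated SQL in a screenshot; the hand-written approximation below is an illustration of its shape only, not EF's literal output (the real query uses generated aliases and pads every subclass column):

    -- Approximation of the union-based SQL for the query above.
    SELECT
        u.BillingDetailId, u.Owner, u.Number,
        CASE WHEN u.RowType = 0 THEN u.BankName END AS BankName,
        CASE WHEN u.RowType = 1 THEN u.CardType END AS CardType
        -- (Swift, ExpiryMonth and ExpiryYear are padded the same way)
    FROM (
        SELECT BillingDetailId, Owner, Number,
               BankName,
               CAST(NULL AS int) AS CardType,
               0 AS RowType               -- literal: this row is a BankAccount
        FROM BankAccounts
        UNION ALL
        SELECT BillingDetailId, Owner, Number,
               CAST(NULL AS nvarchar(4000)) AS BankName,
               CardType,
               1 AS RowType               -- literal: this row is a CreditCard
        FROM CreditCards
    ) AS u;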
    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single best strategy that fits all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

    If you don't require polymorphic associations or queries, lean toward TPC. In other words, use it if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required and where modification of the base class in the future is unlikely.

    If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

    If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly in the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it might not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, namely inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you better insight into the mapping of inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References

    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • dpkg error : old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error:

    $ sudo apt-get autoremove
    Password:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      busybox
    0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
    1 not fully installed or removed.
    Need to get 0B/212kB of archives.
    After this operation, 627kB disk space will be freed.
    Do you want to continue [Y/n]? y
    Selecting previously deselected package nscd.
    (Reading database ... 235651 files and directories currently installed.)
    Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ...
    invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
    dpkg: warning: old pre-removal script returned error exit status 102
    dpkg - trying script from the new package instead ...
    invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
    dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
     subprocess new pre-removal script returned error exit status 102
    update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link
    invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 102
    Errors were encountered while processing:
     /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    What should I do to resolve this error? I have tried

    sudo dpkg --remove --force-remove-reinstreq nscd

    but it did not work.
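    The output itself points at /etc/rc2.d/S76nscd being a plain file where invoke-rc.d expects a symlink into /etc/init.d. A common repair for that, sketched below as an assumption rather than a fix confirmed in this thread, is to restore the symlink and let apt finish the pending steps:

    # S76nscd should be a symlink to ../init.d/nscd; back up the stray
    # file, recreate the link, then let apt retry the half-done install.
    sudo mv /etc/rc2.d/S76nscd /root/S76nscd.bak
    sudo ln -s ../init.d/nscd /etc/rc2.d/S76nscd
    sudo apt-get -f install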

    Read the article

  • SQL SERVER – Convert Old Syntax of RAISERROR to THROW

    - by Pinal Dave
    I have been getting quite a few comments on my Facebook page, and here is one of the questions which instantly caught my attention: "We have a legacy application, and it has been a long time since we started using SQL Server. Recently we upgraded to the latest version of SQL Server and we are updating our code as well. Here is the question for you: there are plenty of places where we have been using old-style RAISERROR code, and now we want to convert it to use THROW. Would you please suggest a sample for the same?"

    Very interesting question. THROW was introduced in SQL Server 2012 to handle errors gracefully and return the error message. Let us quickly see two examples, for earlier versions and for SQL Server 2012.

    Earlier versions of SQL Server:

    BEGIN TRY
        SELECT 1/0
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(2000), @ErrorSeverity INT
        SELECT @ErrorMessage = ERROR_MESSAGE(),
               @ErrorSeverity = ERROR_SEVERITY()
        RAISERROR (@ErrorMessage, @ErrorSeverity, 1)
    END CATCH

    SQL Server 2012 and later:

    BEGIN TRY
        SELECT 1/0
    END TRY
    BEGIN CATCH
        THROW
    END CATCH

    That's it! We are done!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
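    One follow-up worth knowing, not from the original post: a bare THROW inside CATCH re-throws the original error, but THROW can also raise a new error with explicit arguments, which is the closer analogue when the old RAISERROR call raised a custom message outside a CATCH block. A minimal sketch:

    -- THROW with explicit arguments: user error numbers must be >= 50000,
    -- severity is always 16, and the preceding statement must end with ';'.
    BEGIN TRY
        DECLARE @qty INT = -1;
        IF @qty < 0
            THROW 50001, 'Quantity must be positive.', 1;
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
    END CATCH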

    Read the article

  • Update to kernel 3.12 seems to fail: uname reports old rc7

    - by carlo
    I currently run Xubuntu 13.10 with kernel 3.12 rc7. Today I tried updating to the latest 3.12 kernel (non-rc), but this seems to fail. When installing the image and headers I see the following error pass by:

    ...
    run-parts: executing /etc/kernel/postinst.d/dkms 3.12.0-031200-generic /boot/vmlinuz-3.12.0-031200-generic
    Error! The dkms.conf for this module includes a BUILD_EXCLUSIVE directive which does not match this kernel/arch. This indicates that it should not be built.
    ...

    After rebooting, when I do uname -r or cat /proc/version, it tells me that I'm still running on the old rc7 kernel. Since my microphone wasn't working on my Sony Vaio Pro 13, I downloaded and installed the latest ALSA drivers using the oem-audio-hda-daily-dkms package, which did fix the problem (with the mic). Maybe this has something to do with it? I also tried removing the package using

    sudo apt-get purge oem-audio-hda-daily-dkms

    but no success.

    Read the article

  • Why old (301) links stay on Google when breaking site down to multiple domains

    - by Sampo Sarrala
    Some background: we had a single site on a single domain (let's call it mainsite.com) with product information; however, things have changed since then and the product database has grown fast. So we decided to move some major products/manufacturers under their own domains (let's call one of them subsite.com) while still using our main database/codebase.

    What we've done:
    - Added the subsite.com domain for product 1 by Great Products Co.
    - Built some new nice-looking front pages, info pages, etc., plus detail pages that use information from the original db.
    - Redirected product/group links from mainsite.com using 301 redirects.
    - Verified that the redirects work as expected.
    - Waited some time for Google reindexing (over 30 days; I've heard that should be more than enough).

    Results: if I search for our moved products on Google, it finds and lists them, but with old links to our main page, like mainsite.com/group/product1, when it should show the link to the new site, subsite.com/product1. Links from Google redirect as they should; as said, the redirects are verified [301].

    Main question: are there any reasons why Google would not follow the 301 redirects and update the links so that they point to our new mfg/product site subsite.com?

    Read the article

  • redirect an old URL using web.config

    - by Dog
    I'm still very new to using URL rewrites and redirects, and I'm having some problems with something I thought was quite simple. I've just rebuilt a website and want to redirect the old URLs to the new ones. For example:

    http://www.mydomain.com/about.asp?lang=1

    should now be

    http://www.mydomain.com/content.asp?id=100230&title=about&langid=1

    Unfortunately, everything I've tried is giving me errors or simply does nothing. Here is one rule I tried:

    <rule name="redirectoldabout" enabled="true" stopProcessing="true">
      <match url="( .*)" negate="true" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^mydomain\.com/about\.asp\?lang=1$" />
      </conditions>
      <action type="Redirect" url="http://www.mydomain.com/content.asp?id=100230&title=about&langid=1" redirectType="Permanent" />
    </rule>

    but I get an error back:

    HTTP Error 500.19 - Internal Server Error
    The requested page cannot be accessed because the related configuration data for the page is invalid.

    Any suggestions as to what I'm doing wrong? Thanks, dog
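    For what it's worth, two likely culprits stand out: the raw & characters in the action URL are invalid in web.config XML (a classic cause of 500.19) and must be escaped as &amp;, and {HTTP_HOST} holds only the host name, never the path or query string, so the lang=1 test belongs in a {QUERY_STRING} condition. A sketch of a rule along those lines, assuming the IIS URL Rewrite module:

    <rule name="redirectoldabout" stopProcessing="true">
      <!-- match the path only; the query string is tested separately -->
      <match url="^about\.asp$" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="^lang=1$" />
      </conditions>
      <!-- note the &amp; escapes: a raw & is invalid in web.config XML -->
      <action type="Redirect"
              url="http://www.mydomain.com/content.asp?id=100230&amp;title=about&amp;langid=1"
              appendQueryString="false"
              redirectType="Permanent" />
    </rule>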

    Read the article

  • How do I start my career on a 3-year-old degree [on hold]

    - by Gabriel Burns
    I received my bachelor's degree in Com S (second major in math) in December 2011. I didn't have the best GPA (I was excellent at programming projects and had a deep understanding of CS concepts, but school is generally not the best format for displaying my strengths), and my only internship was with a now-defunct startup. After graduation I applied for several jobs and had a fair number of interviews, but never got hired. After a while I got somewhat discouraged, and though I still said I was looking, and occasionally applied for something, my pace slowed down considerably. I remain convinced that software development is the right path for me, and that I could make a real contribution to someone's work force, but I'm at a loss as to how I can convince anyone of this. My major problems are as follows:

    Lack of professional experience -- a problem for every entry-level programmer, I suppose, but everyone seems to want someone with a couple of years under their belt.

    Rustiness -- I've not really done any programming in about a year, and since school all I've really done is various programming competitions and puzzles (codechef, hackerrank, etc.). I need a way to sharpen my skills.

    Long-term unemployment -- while I had a basic fast-food job after I graduated, I've been truly unemployed for about a year now. Furthermore, no one has ever hired me as a programmer, and any potential employer is liable to wonder why.

    Old references -- my references are all college professors and one supervisor from my internship, none of whom I've had any contact with since I graduated.

    Confidence -- I have no doubt that I could be a good professional programmer and make just about any employer glad that they hired me, but I'm aware of my red flags as a candidate and have a hard time heading confidently into an interview.

    How can I overcome these problems and keep my career from being over before it starts?

    Read the article

  • Old Lock Retrofitted for Wireless and Key-free Entry

    - by Jason Fitzpatrick
    What do you do if the old key your landlord gave you is a poor fit for your apartment's lock? If you're the geeky sort, you build a wireless unlocking module to do the work for you. Instructables user Rybitski writes:

    The key to my apartment never worked quite right because it is a copy of a copy of a copy. I am fairly certain that the dead bolt is original to the building and the property manager seems to have lost the original key years ago. As a result unlocking the door was always a pain. Changing the lock wasn't an option, but eliminating the need to use a key was.

    To that end, he built the device shown in the video in the original post. An Arduino Uno drives a servo which in turn opens the deadbolt. The whole thing is controlled by a simple wireless key fob. Hit up the link below for the full build guide, including code.

    Key Fob Deadbolt [via Hack A Day]
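    The Instructables guide has the real code; purely to illustrate the architecture described above, a hypothetical Arduino sketch might look like this (the pin numbers, servo angles, and receiver interface are all assumptions):

    #include <Servo.h>

    const int FOB_PIN = 2;        // digital output of the key-fob receiver
    const int SERVO_PIN = 9;      // servo attached to the deadbolt thumbturn
    const int LOCKED_POS = 0;     // servo angle for "locked" (assumed)
    const int UNLOCKED_POS = 120; // servo angle for "unlocked" (assumed)

    Servo bolt;
    bool unlocked = false;

    void setup() {
      pinMode(FOB_PIN, INPUT);
      bolt.attach(SERVO_PIN);
      bolt.write(LOCKED_POS);     // start in the locked position
    }

    void loop() {
      if (digitalRead(FOB_PIN) == HIGH) {   // fob button pressed
        unlocked = !unlocked;               // toggle the deadbolt
        bolt.write(unlocked ? UNLOCKED_POS : LOCKED_POS);
        delay(500);                         // crude debounce
      }
    }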

    Read the article

  • External monitor on old Dell 700m?

    - by Adam Henne
    Okay, Linux noob here. I installed 10.04 on an old Dell 700m in hopes of using it as a media center device. That seems to work great, except that it won't output to the external monitor. The VGA cable works fine and the TV recognizes other computers fine; it's the laptop that won't output. It turns out that the 700m, unlike seemingly every other computer out there, doesn't have an Fn hotkey to switch monitors; the only way to do it in Windows was through the Control Panel. In Ubuntu, though, the monitor setup won't detect or recognize any monitors. The laptop display works fine, but I can't adjust or add anything. I did a little research and found suggestions for older Ubuntu builds involving xorg.conf. I tried one out, in spite of my unfamiliarity, and it screwed up the whole display big time. I can fix that with a clean reinstall, that's no problem, but I'm cautious about editing xorg again. Any suggestions? Thank you!

    Read the article

  • Resurrecting a 5,000 line test plan that is a decade old

    - by ale
    I am currently building a test plan for the system I am working on. The plan is 5,000 lines long and about 10 years old. The structure is like this:

    1. test title
       precondition: some W needs to be set up, X needs to be completed
       action: do some Y
       postcondition: message saying Z is displayed
    2. ...

    What is this type of testing called? Is it useful? It isn't automated; the tests would have to be handed to some unlucky person to run through, and then the results would have to be given to development. It doesn't seem efficient. Is it worth modernising this method of testing (removing tests for removed features, updating tests where different postconditions happen, ...), or would a whole different approach be more appropriate? We plan to start unit tests, but the software requires so much work to actually get 'units' to test - there are no units at present! Thank you.

    Read the article

  • Using a degrading corrupted hard disk with a brand new one. Is this ok?

    - by EApubs
    My old 500 GB hard drive started to develop bad sectors; it's slowly dying. So I bought a new 1 TB Seagate drive. I first attached the 500 GB drive as the primary and installed Windows: I want the Windows boot loader placed on the old drive so it won't conflict with the Linux system, but the actual Windows system (including the C drive) is placed on my new hard drive. After this, I attached the new drive as the primary and installed Linux. Now, if I want to reinstall Windows, I can do it without any issues by simply setting the old drive as the primary, so the Linux system will be untouched. But is it a good idea to set things up like this? Will the old, degrading drive have an impact on the new one? The old drive is slower than the new one; won't I be unable to get the maximum speed out of the new drive, even when it's used to install everything (including the OS)?

    PS: When I ran the Windows Experience Index, I was using the old drive as the primary. Did it get the hard drive ratings from the old drive? What if I run it now with the new drive as the primary?

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code - native resources of any kind, open files, streams, window handles - your application may leak memory if these are not properly handled. To handle such resources, the classes that own them should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface (a sketch follows below).

    When you suspect a memory leak, the immediate impulse is to start up a memory profiler and start digging. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leak problems in your code. If you get any warnings from this, fix them before you go on with the profiling.
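    As a reminder, here is a minimal sketch of that dispose pattern (illustrative only; the resource fields are placeholders, and NativeMethods.CloseHandle stands in for whatever releases your unmanaged resource):

    using System;

    public class ResourceOwner : IDisposable
    {
        private IntPtr nativeHandle;          // unmanaged resource (placeholder)
        private System.IO.FileStream stream;  // managed disposable (placeholder)
        private bool disposed;

        public void Dispose()
        {
            Dispose(true);
            // the finalizer no longer needs to run; see rule CA1816 below
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // called from Dispose(): safe to touch managed objects
                if (stream != null) stream.Dispose();
            }
            // always release unmanaged resources
            if (nativeHandle != IntPtr.Zero)
            {
                // e.g. NativeMethods.CloseHandle(nativeHandle);
                nativeHandle = IntPtr.Zero;
            }
            disposed = true;
        }

        // finalizer as a safety net for the unmanaged handle (see CA2216)
        ~ResourceOwner()
        {
            Dispose(false);
        }
    }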
    How to use a ruleset

    In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded - see the bottom of this post. Once you have them, you can easily attach them to every project in your solution using the Solution Properties dialog: right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one specified here.

    How to define your own ruleset (skip this if you just download my predefined rulesets)

    If you want to define the ruleset yourself, open the properties of any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box, and press Open. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unselecting the box. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to either Warning, Error or None, None being the same as unchecking the check. Now go to the properties window and set a new name and description for your ruleset, then save (File/Save As) the ruleset using the new name as its file name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item; that way it's there if you want to enable Code Analysis in some of your TFS builds.

    Running the code analysis

    In Visual Studio 2010 you can either run your code analysis project by project, using the context menu in the Solution Explorer and choosing "Run Code Analysis", or define a new solution configuration (call it, for example, Debug (Code Analysis)) and for each project in it check "Enable Code Analysis on Build". In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click, and choose "Run code analysis on solution".

    The ruleset checks

    The following list contains the essential and critical memory checks, in the form CheckID - Message - Can be ignored? - Link to description with fix suggestions:

    CA1001 - Types that own disposable fields should be disposable - No - http://msdn.microsoft.com/en-us/library/ms182172.aspx
    CA1049 - Types that own native resources should be disposable - Only if the pointers assumed to point to unmanaged resources point to something else - http://msdn.microsoft.com/en-us/library/ms182173.aspx
    CA1063 - Implement IDisposable correctly - No - http://msdn.microsoft.com/en-us/library/ms244737.aspx
    CA2000 - Dispose objects before losing scope - No - http://msdn.microsoft.com/en-us/library/ms182289.aspx
    CA2115 - Call GC.KeepAlive when using native resources (note 1) - See description - http://msdn.microsoft.com/en-us/library/ms182300.aspx
    CA2213 - Disposable fields should be disposed - If you are not responsible for their release, or if Dispose occurs at a deeper level - http://msdn.microsoft.com/en-us/library/ms182328.aspx
    CA2215 - Dispose methods should call base class dispose - Only if the call to base happens at a deeper calling level - http://msdn.microsoft.com/en-us/library/ms182330.aspx
    CA2216 - Disposable types should declare a finalizer - Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources - http://msdn.microsoft.com/en-us/library/ms182329.aspx
    CA2220 - Finalizers should call base class finalizers - No - http://msdn.microsoft.com/en-us/library/ms182341.aspx

    Note 1: Does not result in a memory leak, but may cause the application to crash.

    The list below is a set of optional checks that may be enabled for your ruleset, because the issues they point to often arise as a result of attempting to fix the warnings from the first set. The form is ID - Message - Type of fault - Can be ignored? - Link:

    CA1060 - Move P/Invokes to NativeMethods class - Security - No - http://msdn.microsoft.com/en-us/library/ms182161.aspx
    CA1816 - Call GC.SuppressFinalize correctly - Performance - Sometimes, see description - http://msdn.microsoft.com/en-us/library/ms182269.aspx
    CA1821 - Remove empty finalizers - Performance - No - http://msdn.microsoft.com/en-us/library/bb264476.aspx
    CA2004 - Remove calls to GC.KeepAlive - Performance and maintainability - Only if not technically correct to convert to SafeHandle - http://msdn.microsoft.com/en-us/library/ms182293.aspx
    CA2006 - Use SafeHandle to encapsulate native resources - Security - No - http://msdn.microsoft.com/en-us/library/ms182294.aspx
    CA2202 - Do not dispose of objects multiple times - Exception (System.ObjectDisposedException) - No - http://msdn.microsoft.com/en-us/library/ms182334.aspx
    CA2205 - Use managed equivalents of Win32 API - Maintainability and complexity - Only if the replacement doesn't provide the needed functionality - http://msdn.microsoft.com/en-us/library/ms182365.aspx
    CA2221 - Finalizers should be protected - Incorrect implementation, only possible in MSIL coding - No - http://msdn.microsoft.com/en-us/library/ms182340.aspx

    Downloadable ruleset definitions

    I have defined three rulesets: one called Inmeta.Memorycheck with the rules in the first list above, one called Inmeta.Memorycheck.Optionals containing the rules in the second list, and a last one called Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck", downloadable from here.
    Links to some other resources relevant to Static Code Analysis:

    MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    MSDN: Analyzing Managed Code Quality by Using Code Analysis (root of the documentation for this)
    Preventing generated code from being analyzed using attributes
    Online training course on Using Code Analysis with VS2010
    Blog post by Tatham Oddie on custom code analysis rules
    How to write custom rules, from the Microsoft Code Analysis Team Blog
    Microsoft Code Analysis Team Blog

    Read the article

  • Installing and maintaining an email server

    - by Andrew
    I need to move hosting providers for four or five domains, and for several reasons I'm considering a Linux VPS rather than staying with my current shared, managed hosting provider. The only thing that's stopping me is email. I have lots of experience running and maintaining Apache, but none with email servers. Based on some research, if I want to keep what I'm using now, it looks like I'd be going with Postfix and Dovecot, and probably Exim and SpamAssassin. I have no problem performing regular maintenance and watching for security updates, but I don't want to bite off more than I can chew. For someone new to email services, how hard is it to set up an email server that is externally accessible (via SMTP and POP3, not IMAP), available over SSL/TLS, and reasonably reliable for multiple domains? How much of a time commitment is it to maintain one?

    Read the article

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting. To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this: Formatter (produces modified source code that conforms to styling rules) Read Source Code → Apply Styling Rules → Write Styled Source Code Style Checker (does not produce modified source code) Read Source Code → Apply Styling Rules → Write Rule Violations Further Clarifications Solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general purpose pretty-printer written in Java that can pretty-print many things. I want to style Java code. I'm also not necessarily interested in a grand-unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service. I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

    Read the article

  • Who is code wanderer?

    - by DigiMortal
    In every area of life there are people with bad habits or misbehaviors that affect the work process, and software development is not free of such people. Today I will introduce you to the code wanderer.

    Who is the code wanderer?

    Code wandering is more a bad habit than a serious diagnosis. Code wanderers tend to review and "fix" source code in files written by others. When a code wanderer has some free moments, he or she starts opening code files he or she has never seen before and making little fixes to them.

    Why is the code wanderer dangerous?

    These fixes seem correct and are usually the first thing one would reach for when tidying up code. But as the changes are made by a coder who has no idea about the code he or she is "fixing", the "fixing" usually ends up messing up working code written by others. Often these "fixes" are not found immediately, because they don't introduce errors detected by compilers, so they easily find their way into production environments. There is also a very good chance that the "fixed" code goes through all the tests without any problems.

    How to stop the code wanderer?

    The first thing is to talk with the person and explain why those changes are dangerous. It is also good to establish rules that state clearly why, when, and how somebody can change code written by other people. If this does not work, it is possible to isolate the person so that he or she posts changes to the code repository as patches, and somebody reviews those changes before applying them.

    Read the article

  • What would you add to Code Complete 3rd Edition?

    - by Peter Turner
    It's been quite a few years since Code Complete was published. I really love the book, I keep it in the bathroom at the office and read a little out of it once or twice a day. I was just wondering for the sake of wonderment, what kinds of things need to be added to Code Complete 3e, and for the sake of reductionism, what kinds of things would be removed. Also, what languages would you use for code examples?

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short Question

    Does a cost-effective tool/workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews.

    Background

    Our team currently consists of 3 full-time and 1 part-time software engineers, with plans to hire more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that the major tools (such as Review Board and Code Collaborator) use is not attainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and Git repositories.

    Current Thoughts

    Since I cannot find a tool that fits our needs right now, I am turning to the Trac instance that is built into our repository's site. At the moment we use Trac to file tickets and track milestones, so this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or several) to log all of the bugs and comments, do some macro magic to get them into a format usable by Trac's ticket import method, and use Trac's ticketing system to create the action items/bug reports automatically. The auto ticket generation is darn near a must-have; adding bugs and comments one at a time from a web GUI is really painful.

    Secondary Question

    If this workflow makes sense, is there a good/standard template to use as a code review log?

    Read the article
