Search Results

Search found 5915 results on 237 pages for 'practices'.

  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a 3rd-year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:

    Source server: Windows Server 2012 Standard Evaluation
    - DNS Server (local only)
    - Active Directory Domain Services
    - File and Storage stuff
    - A few other server roles

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V guest)
    - Running under a temporary Hyper-V host (the Hyper-V host will be migrated back to the old machine after the original server is virtualized as a guest).
    - Currently sitting at the "Select Installation Mode" screen.

    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues reported by the Best Practices Analyzer on the source machine. I have three remaining issues, which are all related:

    - ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the accompanying text indicates that, during migration, the DNS server may not be found).
    - WARNING: All domains should have at least two domain controllers for redundancy.
    - WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The main issue I am concerned with for installing in migration mode is the FIRST one (the error). If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, this may cause the installation to fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to mind and tried to use Google's DNS servers as my alternates. That did not work, because they couldn't recognize other computers in the "forest".

    Now, I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory for the local network. If I go ahead with migration and it fails, then I suppose I will just have to go ahead and install a secondary DNS server. The problem I have here is that I am limited by the number of Windows Server keys I have available (I have two); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS server (I think) in a VM and use that, but then it seems like I will be wasting time, and probably the Windows key too, and if there's another way to do it with Linux, that would be much better. Even better still, do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found.

    Thank you for your time and consideration.

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Cluster Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route:

    1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives.
    2) In the future, it could be advantageous for us to not be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or if I am overlooking/missing something.

    Read the article

  • Silverlight 4 WCF RIA Services and MVVM is not as simple

    - by Thomas Jaskula
    [Disclaimer: I'm an ASP.NET MVC developer]

    Hi, I'm looking for some best practices for implementing the MVVM pattern with WCF RIA in Silverlight 4. I'm not looking to use MEF or IoC for locating my ViewModels. What I would like to know is how to apply the MVVM pattern with Silverlight 4 and WCF RIA. I don't want to use other stuff like Prism or the MVVM Light Toolkit.

    I found many examples on the Internet showing how wonderful it is to drag and drop a datasource onto the view and the job is done (it reminds me of my first VB6 developments). I tried to implement MVVM with WCF RIA and it's not straightforward at all. If I understand correctly, the ViewModel should contain all the logic in order to unit test it in isolation, but when it comes to combining it with WCF RIA it's another story. I have the following questions:

    1. Can I use the generated metadata as the model? It would be easier to use it than to write it all from scratch.
    2. As far as I saw, the only way I can get data is through the DomainContext or through direct binding in the view (local resource). I don't want the direct binding in the view; it's not testable at all. On the other hand, I can't use the DomainContext: it doesn't expose any single entity!!! All I have is the EntitySet that I can bind to a datagrid.
    3. How do I bind a single Entity to the DataForm from the ViewModel?
    4. How do I update the model to the database?
    5. How do I navigate from one Entity to a collection of its items? For example, if I have a Company Entity, I would like to show a DataForm to update the entity's information and a datagrid to show the company's addresses. When saving the form, I would like to save the information to Company and, for example, information about which address was selected as active.

    Please help me understand how to do it well. Or maybe I should drop WCF RIA and do it with WCF from scratch? What do you think?

    Read the article

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK - not as good as more mature TDD plug-ins, but a good start. I am looking for some best Silverlight 4.0 TDD practices. First question: anyone have links or recommendations? I know the new Silverlight unit test capabilities are much better (Jeff Wilcox's MIX presentation).

    What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I can create an empty VS project, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project but a plain Test Project), and add a simple test in the Test Project such as:

        namespace Calculator.Test
        {
            [TestClass]
            public class CalculatorTests
            {
                [TestMethod]
                public void CalulatorAddTest()
                {
                    Calc c = new Calc();
                    int expected = 10;
                    int actual = c.Add(6, 4);
                    Assert.AreEqual<int>(expected, actual);
                }
            }
        }

    Using the new Generate Type and Method from Test feature, it will generate the following code in the Silverlight project:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    throw new NotImplementedException();
                }
            }
        }

    When I run the tests the first time, it says the target assembly is Silverlight and it is not able to run the test - not the exact text, but the same general idea. When I change the implementation to:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    return p + p_2;
                }
            }
        }

    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate after. I also get a warning mark on the Test Project's reference to the Calculator Silverlight Class Library assembly. Second question: any comments or ideas on whether this is just a bug in VS2010 RC, or whether Silverlight Class Library TDD is not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL.

    Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated out-of-browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third question: any ideas how to TDD a Silverlight 4.0 Class Library project that requires OOB elevated rights?

    Thanks!

    Read the article

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    By the subject line it seems to be a rather broad subject, and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the Administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user.

    Using

        connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx"

    I am able to authenticate in both environments (dev & QA). If I change connectionProtection to "Secure", I am not able to authenticate anymore; the error I get is "Parser Error Message: Unable to establish secure connection with the server". To me it sounds wrong to use connectionProtection="None", although I found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance having the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain - will this affect the behavior in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions as to where to look.

    Now, besides the "signing in" feature of the ActiveDirectoryMembershipProvider, I also want to add an extra one, which is setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry, I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know the right combination of flags that corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option set to ADS_PASSWORD_ENCODE_CLEAR on my dev machine (w/ AD LDS instance); nevertheless, on the QA environment (w/ ADAM instance) I am getting an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)". I am quite sure it is not about AD LDS vs ADAM, but probably another configuration/permission issue.

    So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming, etc.?

    Thanks in advance,
    Iulian

    Read the article

  • Loading a Page into a jQuery Dialog

    - by Dave
    I tend to follow a fairly "modular" approach to building applications, and I recently started working with jQuery. The application I'm working on is going to be fairly large, so I'm trying to break pieces out into separate files/modules when possible. One example of this is a "User Settings" dialog. This dialog has a form, a few tabs, and quite a good number of input fields, so I want to develop it in a separate HTML file (PHP actually, but it can be considered HTML for the purposes of this example). It's an entire page in and of itself, with all the tags you would expect, such as:

        <html><head></head><body></body></html>

    So I can now develop what I want to be the dialog separate from the base application. This dialog has its own JavaScript (A LOT of JavaScript, in fact) in the head as well, with jQuery $(document).ready(){} capturing, etc. Everything works flawlessly in isolation. However, when I attempt to load the page into the jQuery modal dialog (inside of the main application page), as one might expect, trouble ensues. Here's a brief, very simple example of what it looks like:

        editUserDialog.load ("editUser.php",
            {id : $('#userList').val(), popup : "true"},
            function () {
                editUserDialog.dialog ("option", "title" , "Edit User");
                editUserDialog.dialog ('open');
            });

    (I'm passing in a "popup" flag to the page so that the page can determine its context -- i.e. as a page or inside of the jQuery dialog.)

    Question 1: When I moved the code from the "head" into the "body" (in the editUser.php page), it actually worked for the most part. It seemed that jQuery was calling the $(document).ready() function in the context of the body of the loaded file and not the head. Is this a bad idea?

    Question 2: Is my process for building this application just totally flawed to begin with? I've scoured the net to attempt to find a "best practices" sort of document for building reasonably large applications using jQuery/PHP without a lot of success, so maybe there's something out there someone else is aware of that I've somehow missed.

    Thanks for bearing with me while I attempt to describe the issues I've encountered, and I hope I've accurately described the problem.

    Read the article

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.NET web farm as close to real-time as possible. The objectives of this question are to:

    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI, such as CPU, memory, and disk paging. I am defining my time constraint as "as soon as possible", with 120 seconds delayed being the absolute upper limit.
    - Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time).

    Here are more details, things I've tried, etc.:

    - I am not interested in logging. We have logging solutions in place.
    - I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm.
    - ASP.NET Health Monitoring is too broad, focuses too much on logging, and is not acceptable for deep analysis.
    - We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis, but it does not help us in real time.
    - Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring.
    - The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run itself on and a good deal of manual configuration. I'd prefer to not spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.

    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal. I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If the client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas?

    Thanks for any help/feedback.
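
    For illustration only, here is a minimal sketch of that home-grown poller idea - written in Java rather than .NET, with made-up host names, a hypothetical /status endpoint, and a 5-second timeout standing in for "a reasonable amount of time"; results go to stdout instead of a database.

        // Poll a status URL on each farm member; a timeout or error counts as "down".
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.time.Duration;
        import java.time.Instant;
        import java.util.List;

        public class FarmPoller {
            public static void main(String[] args) throws InterruptedException {
                List<String> hosts = List.of("web01", "web02", "web03"); // hypothetical farm members
                HttpClient client = HttpClient.newBuilder()
                        .connectTimeout(Duration.ofSeconds(5))
                        .build();

                while (true) {
                    for (String host : hosts) {
                        HttpRequest request = HttpRequest.newBuilder(URI.create("http://" + host + "/status"))
                                .timeout(Duration.ofSeconds(5))   // a slow response is effectively "down"
                                .build();
                        try {
                            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                            // The /status page would return the CPU / memory / paging figures gathered via WMI.
                            System.out.println(Instant.now() + " " + host + " UP " + response.body());
                        } catch (Exception e) {
                            System.out.println(Instant.now() + " " + host + " DOWN (" + e.getMessage() + ")");
                        }
                    }
                    Thread.sleep(30_000); // well inside the 120-second upper limit
                }
            }
        }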

    Read the article

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.)

    Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try to keep each table pretty much even -- but that's a different issue.

    One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

        class Books3 < ActiveRecord::Base
          belongs_to :user
        end

    I'm assuming that the main performance hit would come in terms of memory and possibly some method-call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_... But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this.

    So, a few questions:

    1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this?
    2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?
    3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single Books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author').

    Thank you!

    Read the article

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice to the "best practices" etc. of writing any code. I tend to just write it and, if it works, why fix it? Well, this way of working is landing me in some hot water. I am writing a simple Windows service to serve a single webpage. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.)

    My problem is that whenever a request is received, the memory usage jumps up by a few K per request and keeps going up on every request. Now I've found that by putting GC.Collect in the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something or not doing something I should to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase
            'Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub
        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.
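
    For comparison, a rough sketch of the same kind of single-page HTTP service using a small fixed thread pool that reuses its worker threads instead of starting a new Thread per request - written in Java (not the poster's VB.NET) purely to illustrate the idea; the port and response body simply mirror the example above.

        // Minimal single-page HTTP service with a bounded, reusable thread pool.
        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.Executors;

        public class SimpleWebService {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(1986), 0);
                server.createContext("/", SimpleWebService::handle);
                // Reuse a handful of worker threads instead of creating one per request.
                server.setExecutor(Executors.newFixedThreadPool(4));
                server.start();
            }

            private static void handle(HttpExchange exchange) throws java.io.IOException {
                byte[] body = "<html><body><h1>Test My Service</h1></body></html>"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body); // stream closed by try-with-resources
                }
                exchange.close();
            }
        }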

    Read the article

  • Web Safe Area (optimal resolution) for web app design

    - by M.A.X
    I'm in the process of designing a new web app, and I'm wondering for what 'web safe area' I should optimize the app layout and design. I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found:

    Optimal display resolution:

    - w3schools web stats seems to be the most referenced source (however, they state that these are results from their site and are biased towards tech-savvy users).
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services).
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected from across the StatCounter network of more than 3 million websites).
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month).

    Display resolution summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers. My recommendation would be to optimize around the 1280x768 constraint (*note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space-consuming) and the OS (assuming WinXP-Win7):

    - Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px).
    - The default IE8 view uses up 25px at the bottom of the screen with the status bar, and a further 120px at the top of the screen with the Windows title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar).
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.

    This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583.

    Is this a correct line of thinking? I tried to Google some design best practices, but most still talk about designing around 1024x768, which seems to be quickly disappearing. Any help on this would be greatly appreciated! Thanks.
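
    A quick check of the arithmetic above (the pixel values are the Win7 + IE8 figures quoted in the question; Java is used only as a calculator):

        public class WebSafeArea {
            public static void main(String[] args) {
                int screenW = 1280, screenH = 768;
                int taskbar = 40;          // Windows 7 taskbar height
                int statusBar = 25;        // IE8 status bar
                int browserChrome = 120;   // title bar + IE8 UI with favorites toolbar
                int windowBorders = 4;     // left + right window outline
                System.out.println("Safe width:  " + (screenW - windowBorders));                       // 1276
                System.out.println("Safe height: " + (screenH - taskbar - statusBar - browserChrome)); // 583
            }
        }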

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software.

    I have tried a number of wikis, most prominently DokuWiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support in structuring a multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open source would be a plus but is not a requirement.

    Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out.

    The suggestions:

    - Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller-size project documentation.
    - Markdown: my current favourite because of its minimalism, however I'm not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box.
    - The DocBook format and, to edit it, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes, as I need something to open quickly, write into and go on coding. Still always worth a mention.
    - Sphinx: an open-source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting, and I will take a look.
    - Confluence: a commercial but very affordable wiki.
    - XWiki: an open-source wiki playing in Confluence's league, with numerous extensions and connectors to Eclipse and Microsoft Office.
    - TiddlyWiki: an open-source wiki.

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes ?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDAs with the J9 Foundation profile). Over time, the library that started as a single project spread over multiple packages. As a result, I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars.

    Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the include/exclude jar task features. This is mostly needed due to the jar size constraints for the mobile projects.

    Migrating now to Maven, I just wonder if there are any best practices to arrive at something similar, so I am considering the following scenarios:

    [1] - Additionally to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) using the Maven Jar Plugin or Maven Assembly Plugin filtering/includes... I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters).

    [2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" my-lib and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the my-lib project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it will leave the my-lib project intact (with no additional classifiers & artifacts), it will basically double the number of projects, since for every client project I'll have one just to collect the needed classes from the my-util library (a my-pda-app project will need a my-lib-wrapper-for-my-pda-app project/dependency).

    [3] - Into every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the un-needed classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).

    What is the best practice for solving this kind of problem in the "Maven way"?

    Read the article

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to Stack Overflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language.

    Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong.

    I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like. "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

    Read the article

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning ==

    Looking for well-supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code, then fine, but please refrain from repeating that opinion here.

    Many Python IDEs will start you with a template like:

        print 'hello world'

    That's not enough... So here's my skeleton code to get this question started:

    My module skeleton, short version:

        #!/usr/bin/env python
        """
        Module Docstring
        """

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__=='__main__':
            test()

    and, my module skeleton, long version:

        #!/usr/bin/env python
        # -*- coding: ascii -*-
        """
        Module Docstring
        Docstrings: http://www.python.org/dev/peps/pep-0257/
        """

        __author__ = 'Joe Author ([email protected])'
        __copyright__ = 'Copyright (c) 2009-2010 Joe Author'
        __license__ = 'New-style BSD'
        __vcs_id__ = '$Id$'
        __version__ = '1.2.3' #Versioning: http://www.python.org/dev/peps/pep-0386/

        #
        ## Code goes here.
        #

        def test():
            """ Testing Docstring"""
            pass

        if __name__=='__main__':
            test()

    Notes:

        """
        ===MODULE TYPE===
        Since the vast majority of my modules are "library" types, I have constructed
        this example skeleton as such. For modules that act as the main entry for
        running the full application, you would make changes such as running a main()
        function instead of the test() function in __main__.

        ===VERSIONING===
        The following practice, specified in PEP8, no longer makes sense:

            __version__ = '$Revision: 1.2.3 $'

        for two reasons: (1) Distributed version control systems make it necessary to
        include more than just a revision number, e.g. author name and revision number.
        (2) It's a revision number, not a version number.

        Instead, the __vcs_id__ variable is being adopted. This expands to, for example:

            __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $'

        ===VCS DATE===
        Likewise, the date variable has been removed:

            __date__ = '$Date: 2009/01/02 20:19:18 $'

        ===CHARACTER ENCODING===
        If the coding is explicitly specified, then it should be set to the default
        setting of ascii. This can be modified if necessary (rarely in practice).
        Defaulting to utf-8 can cause anomalies with editors that have poor unicode
        support.
        """

    There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best Python module skeleton code?

    Update: Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

    Read the article

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page.

    I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing:

        return 1 if ( caller() );

    That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script.

    The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this:

        require_ok('script_to_test.pl');
        $var_from_other_script = 'Override Value';
        ok( sub_from_other_script() );

    Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against.

    I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables, I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so when I do start changing the other script, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code, so having a good test suite would help.

    Is this possible? Can a calling script modify variables in another script declared with "my"?

    And please don't jump in with answers like, "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.

    Read the article

  • Verify Authenticode signature as being from our company for automatic updater

    - by James Johnston
    I am implementing an automatic update feature and need some advice on how to do this securely using best practices. I would like to use the downloaded file's Authenticode signature to verify that it is safe to run (i.e. it originates from our company and hasn't been tampered with). My question is very similar to question #2008519.

    The bottom-line question: what's the best, most secure way to check Authenticode signatures for an automatic update feature? What fields in the certificate should be checked? The requirements being: (1) check that the signature is valid, (2) check that it's my signature, (3) old clients can still update when my certificate expires and I get a new one.

    Here's some background information / ideas from my research. I believe this could be broken into two steps:

    1. Verify that the signature is valid. I believe this should be easy using WinVerifyTrust as outlined in http://msdn.microsoft.com/en-us/library/aa382384(VS.85).aspx - I don't expect problems here.

    2. Verify that the signature corresponds to our company, and not another company. This seems to be a more difficult question to answer:

    One possibility is to check some of the strings in the signature. These could be obtained via the code at MS KB article #323809, but this article doesn't make recommendations on what fields should be checked for this type of application (or any other, for that matter). Question #1072540 also illustrates how to get some certificate info, but again doesn't recommend what fields to actually check. My concern is that the strings might not be the best check: what if another person is able to obtain a certificate with the same name, for example? Or if there's a valid reason for us to change the strings in the future?

    The person at question #2008519 has a very similar requirement. His need for a "TrustedByUs" function is identical to mine. However, he goes about doing the check by comparing public keys. While this would work in the short term, it seems like it won't work for an automatic update feature. This is because code-signing certificates are only valid for 2-3 years max. Therefore, in the future, when we buy a new certificate in 2 years, the old clients wouldn't be able to update any more due to the change in public key.

    Read the article

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk, and I have a couple of questions about best practices and proper ways of doing things.

    What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type:

        typedef struct sUserProfile {
            char name[NAME_SZ];
            char address[ADDRESS_SZ];
            int socialNumber;
            char password[PASSWORD_SZ];
            HashTable *mailbox;
            short msgCount;
        } UserProfile;

    And this is how I'm currently writing all the profiles to disk:

        void ioWriteNetworkState(SocialNetwork *social) {
            Vertex *currPtr = social->usersNetwork->vertices;
            UserProfile *user;
            FILE *fp = fopen("save/profiles.dat", "w");

            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            fwrite(&(social->usersCount), sizeof(int), 1, fp);

            while(currPtr) {
                user = (UserProfile*)currPtr->value;

                fwrite(&(user->socialNumber), sizeof(int), 1, fp);
                fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp);
                fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp);
                fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp);
                fwrite(&(user->msgCount), sizeof(short), 1, fp);

                break;

                currPtr = currPtr->next;
            }

            fclose(fp);
        }

    Notes:

    - The first fwrite() you see will write the total user count in the graph, so I know how much data I need to read back.
    - The break is there for testing purposes. There are thousands of users and I'm still experimenting with the code.

    My questions:

    - After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox, as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower?
    - How do I read back this content? I know I have to use fread(), but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
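
    A common answer to the last question is exactly the length-prefix idea mentioned: write each string's length just before its bytes, so the reader knows how much to read back. Purely as an illustration of that record layout (sketched in Java's DataOutputStream/DataInputStream rather than C, with made-up field values):

        import java.io.*;

        public class LengthPrefixDemo {
            public static void main(String[] args) throws IOException {
                File f = new File("profiles.dat");

                // writeUTF() stores a 2-byte length followed by the string bytes.
                try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
                    out.writeInt(42);               // socialNumber
                    out.writeUTF("Ada Lovelace");   // name: length first, then bytes
                    out.writeUTF("12 Example St");  // address
                    out.writeShort(3);              // msgCount
                }

                // readUTF() reads the stored length, then exactly that many bytes.
                try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
                    int socialNumber = in.readInt();
                    String name = in.readUTF();
                    String address = in.readUTF();
                    short msgCount = in.readShort();
                    System.out.println(socialNumber + " " + name + " " + address + " " + msgCount);
                }
            }
        }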

    Read the article

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company which has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance test arena. We want to take our well-formed user stories and use these to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story, which we can then automate.

    To date, we've tried Fit, FitNesse and Selenium. Each has its advantages, but we've also had real issues with them as well. With Fit and FitNesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business haven't fully bought into these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow and relies on real-time data and resources.

    One approach we are now considering is the use of the JUnit framework to provide similar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link, but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page.

    This seems to me to have the following advantages:

    - Simplicity (no additional frameworks required)
    - Zero effort to integrate with our continuous integration build server (since it already handles our JUnit tests)
    - Full skillset already present in the team (it's just a JUnit test after all)

    And the downsides being:

    - Less customer involvement (though they are heavily involved in writing the user stories in the first place, from which the acceptance tests will be written)
    - Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class versus a free-text specification a la Fit or FitNesse

    So, my question is really: have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.
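
    For illustration, a rough sketch of what such a story-level JUnit test could look like. The class names (PolicyService, StubPolicyRepository, Policy) are hypothetical stand-ins, not taken from the question; in a real codebase the test would call the actual entry point and a stubbed data access layer.

        import static org.junit.Assert.assertEquals;
        import java.util.HashMap;
        import java.util.Map;
        import org.junit.Test;

        public class ViewBasicPolicyDetailsStoryTest {

            // Minimal stand-ins for the real application classes (names are made up).
            static class Policy {
                final String number, product;
                Policy(String number, String product) { this.number = number; this.product = product; }
            }

            /** Stubbed data access layer -- replaces the real database for the test. */
            static class StubPolicyRepository {
                private final Map<String, Policy> data = new HashMap<>();
                void add(Policy p) { data.put(p.number, p); }
                Policy findByNumber(String number) { return data.get(number); }
            }

            /** The "real" application logic the story-level test drives end to end. */
            static class PolicyService {
                private final StubPolicyRepository repository;
                PolicyService(StubPolicyRepository repository) { this.repository = repository; }
                String describe(String number) {
                    Policy p = repository.findByNumber(number);
                    return p.number + ": " + p.product;
                }
            }

            // "As a user I would like to see basic details of my policy..."
            @Test
            public void userSeesBasicPolicyDetails() {
                StubPolicyRepository repository = new StubPolicyRepository();
                repository.add(new Policy("POL-123", "Home Insurance"));

                PolicyService service = new PolicyService(repository);

                // Acceptance criteria expressed as assertions on what the user should see.
                assertEquals("POL-123: Home Insurance", service.describe("POL-123"));
            }
        }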

    Read the article

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform object-relational mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML serialization and deserialization.

    Right now, I develop stored procedures in Oracle and SQL Server that use XML types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against, regardless of the data source. Moreover, since I exclusively use stored procedures to perform database operations, I'm mostly protected from changes in the data structure.

    Here's an over-simplified example of what I've been doing.

        static void main()
        {
            testXmlClass test = new test(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example Serializable Class ------------------------------------------------
        [XmlRootAttribute("test")]
        class testXmlClass()
        {
            [XmlElement(Name="id")]
            public int ID {set; get;}

            [XmlElement(Name="name")]
            public string Name {set; get;}

            //create an instance of the class loaded with data.
            public testXmlClass(int id)
            {
                GenericDBProvider db = new GenericDBProvider();
                this = db.ExecuteSerializable("myGetByIDProcedure");
            }

            //save the class to the database...
            public Save()
            {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.XML, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database Handler ----------------------------------------------------------
        class GenericDBProvider
        {
            public T ExecuteSerializable<T>(string commandText) where T : class
            {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // connection and command code is assumed for the purposes of this example.
                // the final results basically just come down to...
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText)
            {
                // once again, connection and command code is assumed...
                // basically, just execute the command along with the specified
                // parameters which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value)
            {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());

                //handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String)))
                {
                    x.Serialize(w, value);
                    w.Close();
                    // store serialized object in a DbParameterCollection accessible
                    // to my other methods.
                }
                else
                {
                    //handle all other parameter types
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation, and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.
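
    For comparison, a minimal sketch of the equivalent "load by id, change a field, save" flow under an ORM. Since NHibernate is the .NET port of Hibernate, the sketch is written against Hibernate/JPA in Java purely to show the shape of the code; the entity, table, and persistence-unit names are made up.

        import javax.persistence.*;

        @Entity
        @Table(name = "test")
        public class TestEntity {
            @Id
            @Column(name = "id")
            private int id;

            @Column(name = "name")
            private String name;

            public void setName(String name) { this.name = name; }
        }

        // Usage: the ORM generates the SQL and tracks the change -- no stored
        // procedure and no XML round trip.
        class Demo {
            public static void main(String[] args) {
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("examplePU");
                EntityManager em = emf.createEntityManager();
                em.getTransaction().begin();
                TestEntity test = em.find(TestEntity.class, 1); // load by primary key
                test.setName("Foo");                            // tracked by the persistence context
                em.getTransaction().commit();                   // UPDATE issued automatically
                em.close();
                emf.close();
            }
        }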

    Read the article

  • Set maven to use archiva repositories WITHOUT using activeByDefault?

    - by Sam Levin
    I am very close to finally having a working setup with Archiva and Maven. The last thing that's really boggling me is how to set up my internal and snapshot repositories - without using a profile which contains activeByDefault set to true.

    I am using a SUPER super POM - a company-wide POM which contains distributionManagement information for releases. I was thinking that I could specify the repositories in this POM and configure the authentication settings in settings.xml? Can I use the repositories tag without a profile? There should be no "profile" for my internal and snapshot repositories, as they will never change...

    What I'm trying to steer clear from is using a "default" profile which is active all the time. I hear activeByDefault is NOT a best practice and I don't intend to use it. With that said, how should I go about doing this? My internal repo is a mirror of the Maven Central repo, so I would like to lock down my developers to ONLY use our internal artifact server. Remember - I do NOT want a profile with activeByDefault set to true. I cannot stress this enough!

    Should I use Maven mirrors? Should I "add" additional repositories? If I take the repositories tag instead of the mirrors tag, will Maven force builds to use ONLY my Archiva settings instead of the default Maven Central? Or is what I seek to accomplish able to be done using only the mirrors tag in Maven? I know how to configure repo credentials when using the repositories tag, but not with mirrors. How is this done? Is providing credentials for anything in mirrors tags the same as for anything in repositories tags? Am I missing something obvious?

    I've had it up to here with getting things up and running using Maven. I know it will be worthwhile in the end, but it is surely causing me a ton of aggravation, and resources seem to be sparse. Either that, or people are content using it however they please without regard to best practices. Thank you

    Read the article

  • Observer pattern used with decorator pattern

    - by icelated
    I want to make a program that does an order entry system for beverages (I will probably track description and cost). I want to use the Decorator pattern and the Observer pattern. I made a UML drawing and saved it as a picture for easy viewing; this site won't let me upload a Word document, so I have to upload a picture - I hope it's easily viewable. I need to know if I am doing the UML / design patterns correctly before moving on to the coding part.

    Beverage is my abstract component class. Espresso, HouseBlend, and DarkRoast are my concrete subject classes. I also have a CondimentDecorator class. Would Milk, Mocha, Soy, and Whip be my observers, because they would be interested in data changes to cost? In other words, would Espresso, HouseBlend, etc. be my SUBJECT and the condiments be my OBSERVERS? My theory is that cost is what changes and that the condiments need to know about the changes. So: subject = Espresso, HouseBlend, DarkRoast, etc. (they hold cost()) and observer = Milk, Mocha, Soy, Whip (they also hold cost()); the beverages would be the concrete components and Milk, Mocha, Soy, and Whip would be the decorators.

    Following good software engineering practices ("design to an interface, not an implementation"; "identify the things that change from those that don't"), would I need a CostBehavior interface? If you look at the UML you will see where I am going with this and whether I am implementing the Observer + Decorator patterns correctly. I think the Decorator part is correct.

    Since the picture is not very viewable, I will detail the classes here:

    - Beverage class (register observer, remove observer, notify observers, description)
    - Concrete beverage classes: Espresso, HouseBlend, DarkRoast, Decaf (cost, getDescription, setCost, costChanged)
    - Observer interface (update) // cost?
    - CostBehavior interface (cost) // since this changes?
    - CondimentDecorator class (getDescription)
    - Concrete classes linked to the two interfaces and the decorator: Milk, Mocha, Soy, Whip (cost, getDescription, update) - these are my decorator/wrapper classes

    Thank you. Is there a way to make this picture bigger?
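
    For reference, in the well-known version of this beverage example (from Head First Design Patterns) the condiments are pure decorators, not observers: each wrapper adds its own amount to the wrapped beverage's cost() when it is asked, so nothing has to be notified when cost changes, and the Observer pattern is better reserved for something like the order notifying a display or billing component. A minimal C# sketch of the decorator half, with class and member names that are illustrative assumptions rather than the poster's UML:

        // Component
        public abstract class Beverage
        {
            public abstract string GetDescription();
            public abstract double Cost();
        }

        // Concrete component
        public class Espresso : Beverage
        {
            public override string GetDescription() { return "Espresso"; }
            public override double Cost() { return 1.99; }
        }

        // Decorator: is-a Beverage and has-a Beverage that it wraps
        public abstract class CondimentDecorator : Beverage
        {
            protected readonly Beverage Wrapped;
            protected CondimentDecorator(Beverage wrapped) { Wrapped = wrapped; }
        }

        // Concrete decorator: adds its own price on top of whatever it wraps
        public class Mocha : CondimentDecorator
        {
            public Mocha(Beverage wrapped) : base(wrapped) { }
            public override string GetDescription() { return Wrapped.GetDescription() + ", Mocha"; }
            public override double Cost() { return Wrapped.Cost() + 0.20; }
        }

        // Usage: cost is recomputed on demand by walking the wrappers,
        // so no observer is needed between beverages and condiments.
        // Beverage drink = new Mocha(new Mocha(new Espresso()));
        // Console.WriteLine(drink.GetDescription() + " = " + drink.Cost());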

    Read the article

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages?

    Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, which each might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view.

    For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language:

        course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault

    If we do that, we cannot easily bind to any value in one given language, only to the collection of language values, such as in an ItemsControl.

        <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author -->

    Or do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language:

        course1.GetLanguage(CurrentLanguage).Books.First.Author

    If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit another.

        <TextBox Text={Binding Author} /> <!-- good -->
        <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author -->

    That approach also has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages, even non-string properties.

    Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it would seem to be a somewhat common problem to design for.

    Note: This is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.
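
    One possible middle ground (a sketch only - the LocalizedString type, its Current property, and the static CurrentLanguage below are assumptions for illustration, not part of the question) is to wrap only the translatable values in a small multi-language type and leave language-neutral properties as ordinary strings; the wrapper exposes both an indexer for a specific language and a convenience property for the current one, so a view can bind to either:

        using System.Collections.Generic;
        using System.ComponentModel;

        // Hypothetical wrapper for one translatable value; language-neutral
        // properties (e.g. Book.Author) would stay plain strings.
        public class LocalizedString : INotifyPropertyChanged
        {
            private readonly Dictionary<string, string> _values =
                new Dictionary<string, string>();

            public event PropertyChangedEventHandler PropertyChanged;

            // WPF supports indexer bindings, e.g. {Binding Title[fr]} or {Binding Title[en]},
            // which makes side-by-side editing of two languages possible.
            public string this[string language]
            {
                get
                {
                    string value;
                    return _values.TryGetValue(language, out value) ? value : string.Empty;
                }
                set
                {
                    _values[language] = value;
                    Raise("Item[]");   // refresh indexer bindings
                    Raise("Current");
                }
            }

            // Assumed application-wide current language, for {Binding Title.Current}.
            public static string CurrentLanguage = "en";

            public string Current
            {
                get { return this[CurrentLanguage]; }
                set { this[CurrentLanguage] = value; }
            }

            private void Raise(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }

    A Title of this type could then be bound as {Binding Title.Current} in a single-language editor, or as {Binding Title[en]} next to {Binding Title[fr]} in a side-by-side view, while Author remains a plain String.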

    Read the article

  • How do I correct "Commit Failed. File xxx is out of date. xxx path not found."

    - by Ryan Taylor
    I have recently run into a particularly sticky issue regarding committing the result of a merge in Subversion. Our Subversion server is at 1.5.0 and my TortoiseSVN client is now at 1.6.1.

    I am trying to merge a feature branch back into my trunk. The merge appears to work okay; however, the commit fails with the following error message:

        Commit failed (details follow):
        File 'flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' is out of date
        '/svn/ibis/!svn/wrk/531d459d-80fa-ea46-bfb4-940d79ee6d2e/visualization/trunk/source/flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' path not found
        You have to update your working copy first.

    My working trunk is up to date. I have even checked out a new one into a different folder to make sure there wasn't any local cruft messing with the merge.

    I have done some more research into this and I think part of the problem is user error. I think our problems are:

    - We had some developers committing work with a Subversion client before 1.5 and some after. I believe this has the potential to corrupt the merge info.
    - In other branches we have performed partial merges. That is, we did not always perform merges at the root of the branch. This was to facilitate updating Flex and .NET efforts within the same branch.
    - We performed cyclic (reflexive) merges on our branch. This was done because we had multiple parallel branches and we wanted to periodically update our branch with the latest code in trunk.

    All of these things are explicitly not recommended by the Subversion book/team. We have learned our lesson and now know the best practices. However, we first need to merge and commit our latest branch. What is the best way to correct the problems we are encountering? Would deleting all the merge info in the trunk and branch be a viable solution? (No - I have tried this, but it does not resolve the error I am getting above.)
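
    As a starting point for untangling the merge tracking data (a sketch only - the branch URL is a placeholder, and whether a reintegrate merge is appropriate depends on how the branch was kept in sync), the subtree mergeinfo left behind by partial and cyclic merges can be located and, if necessary, cleared before attempting the merge again from a fresh, fully updated working copy of trunk:

        # Show every svn:mergeinfo property in the working copy; partial and
        # cyclic merges tend to leave it on subdirectories, not just the root.
        svn propget svn:mergeinfo -R .

        # If subtree mergeinfo is the culprit, remove it so that only the root
        # of trunk/branch carries merge tracking information, then commit.
        svn propdel svn:mergeinfo -R some/subdirectory
        svn commit -m "Remove subtree mergeinfo"

        # Then merge the feature branch into a clean working copy of trunk.
        svn update
        svn merge --reintegrate http://server/svn/ibis/.../branches/my-feature .
        svn commit -m "Merge feature branch back into trunk"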

    Read the article

  • Browser displays page without styles for a short moment (visual glitch)

    - by Pierre
    I have observed that, very infrequently, Internet Explorer (7 or 8, it does not matter) displays our web pages (www.epsitec.ch) for a short time without applying the CSS. The layout appears completely broken, with everything displayed sequentially from top to bottom. When the page has finished loading, everything finally gets displayed properly.

    Our web pages do not use any fancy scripting, just two JavaScript inclusions for QuantCast and Google Analytics, done at the end of the page. By the way, we already had the issue before adding the QuantCast script. The CSS gets linked in the <head> section:

        <head>
          <title>Crésus Comptabilité</title>
          <link rel="icon" href="/favicon.ico" type="image/x-icon" />
          <link rel="shortcut icon" href="http://www.epsitec.ch/favicon.ico" />
          <link href="../../style.css" rel="stylesheet" type="text/css" />
          ...
        </head>

    and then follows static HTML up to the final chunk, which includes the JavaScript:

        ...
        <div id="account">
          <a class="deselect" href="/account/login">Identifiez-vous</a>
          <script type="text/javascript">
            _qoptions={qacct:"..."};
          </script>
          <script type="text/javascript" src="http://edge.quantserve.com/quant.js">
          </script>
          <noscript>
            <img src="..." style="display: none;" border="0" height="1" width="1"/>
          </noscript>
        </div>
        <div id="contact">
          <a href="/support/contact">Contactez-nous</a>
        </div>
        <div id="ending"><!-- --></div>
        </div>
        <script type="text/javascript">
          ...
        </script>
        <script type="text/javascript">
          var pageTracker = _gat._getTracker("...");
          pageTracker._initData();
          pageTracker._trackPageview();
        </script>
        </body>

    As this is a very short visual glitch, I have no idea what provokes it. Worse, I cannot reproduce it, and it appears only on rare occasions. How can I further investigate the cause of the glitch? Are there any best practices I should be aware of?

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >