Search Results

Search found 20065 results on 803 pages for 'practice problems'.


  • When does one give up on programming challenges and look at the solutions?

    - by snowpolar
    Recently, I have been trying to learn programming and to improve my ability to write method-level code by practicing on websites such as Codingbat.com. However, in recent weeks I have been stuck for weeks on the last 2-3 questions of String-2/Array-2 and the early String/Array-3 problems. It feels really tempting to give up and google the solutions, but I'm afraid that by doing so I may end up not improving my ability at all. I wonder if this is common, and when faced with such situations, how long do you wait before giving up and looking at the solutions, rather than spending more weeks trying to solve the problems yourself? How do you engage in effective deliberate practice to improve programming ability and attain the necessary problem-solving techniques? Are there any formal techniques for tackling never-seen-before problems?

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year we want to weed out the weak link by holding a competition to pick the best coder from our group of candidates. The focus is on IEEEXtreme-like contests. What I've done: I've been trying for 2 weeks already to find a practice or test site, like UVa or CodeChef. The plan after I find one: send the candidates a list of direct links to the problems (making them the "contest's" problem list), get them to email me the code of their correct answers at the time the judge says they have solved it, and accept the fastest one onto the team. Issues: We had already practiced on UVa (on programming challenges too), so our former teammate (who will be in the candidate group) already has an advantage if we used it. CodeChef has all its answers public, and since it shows the latest ones it will be extremely hard to verify whether an answer was copied. I've found other sites, like SPOJ, but they share at least some problems with CodeChef, so they inherit CodeChef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get all the stuff needed to set up a Mooshak or similar contest (as in the stuff to get the problems; instructions to set up the server itself are easy to google)? Any other idea?

    Read the article

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that needs to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like:
    - Key rotation (we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext)
    - Metadata mapping of which tables and which columns are encrypted. This is simple when it's just a couple of columns (send an email to all devs / document it), but that quickly gets out of hand...
    So, what is the best practice for doing application-level encryption against a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you have specific schema additions I would love it if you could give details ("Have an NVARCHAR(max) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
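
    A minimal sketch of the kind of roll-your-own column encryption described above (illustrative code, not from the original question), assuming AES via AesCryptoServiceProvider, a random IV stored next to the ciphertext, and a key-version byte so rotation stays possible later:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical sketch: stored layout is [key version][16-byte IV][ciphertext].
        public static class ColumnCrypto
        {
            public static byte[] Encrypt(string plaintext, byte[] key, byte keyVersion)
            {
                using (var aes = new AesCryptoServiceProvider { Key = key })
                {
                    aes.GenerateIV();                               // fresh random IV per value
                    using (var enc = aes.CreateEncryptor())
                    {
                        byte[] data = Encoding.UTF8.GetBytes(plaintext);
                        byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                        byte[] stored = new byte[1 + aes.IV.Length + cipher.Length];
                        stored[0] = keyVersion;                     // which key encrypted this value
                        Buffer.BlockCopy(aes.IV, 0, stored, 1, aes.IV.Length);
                        Buffer.BlockCopy(cipher, 0, stored, 1 + aes.IV.Length, cipher.Length);
                        return stored;                              // save in a VARBINARY column
                    }
                }
            }

            public static string Decrypt(byte[] stored, Func<byte, byte[]> keyForVersion)
            {
                byte[] iv = new byte[16];
                Buffer.BlockCopy(stored, 1, iv, 0, iv.Length);
                using (var aes = new AesCryptoServiceProvider { Key = keyForVersion(stored[0]), IV = iv })
                using (var dec = aes.CreateDecryptor())
                {
                    byte[] plain = dec.TransformFinalBlock(stored, 17, stored.Length - 17);
                    return Encoding.UTF8.GetString(plain);
                }
            }
        }

    With the version byte in place, key rotation becomes a background job that re-reads rows whose version byte is old and rewrites them under the current key, which matches the "walking through the db converting old ciphertext into new ciphertext" approach described above.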

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application, as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
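
    As a hedged illustration (the entity and field names here are made up, not from the original article), such a query might look like this: it groups, aggregates and sorts millions of order rows on the database server, yet hands only ten rows back to the application.

        // Hypothetical LINQ query: heavy grouping/aggregation/sorting server-side,
        // but only 10 rows cross the wire into the application.
        var top10Customers =
            (from o in metaData.Orders                  // millions of rows in the database
             group o by o.CustomerId into g
             orderby g.Sum(o => o.TotalAmount) descending
             select new
             {
                 CustomerId = g.Key,
                 Revenue    = g.Sum(o => o.TotalAmount),
                 OrderCount = g.Count()
             })
            .Take(10)
            .ToList();
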
    If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis, no problem, no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating, and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Check what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding about which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus they reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the various parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

    Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important.
    The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to look first at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or the O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, when someone else looks at the application in the future and starts asking questions, you can answer them properly; new analysis is then only necessary if the situation has changed.

    Fix

    After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get, for the single list, not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch on server-side paging on the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
    when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is processed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in-memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T> and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware that there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
    I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Dealing with state problems in functional programming

    - by Andrew Martin
    I've learned how to program primarily from an OOP standpoint (like most of us, I'm sure), but I've spent a lot of time trying to learn how to solve problems the functional way. I have a good grasp on how to solve calculational problems with FP, but when it comes to more complicated problems I always find myself reverting to needing mutable objects. For example, if I'm writing a particle simulator, I will want particle "objects" with a mutable position to update. How are inherently "stateful" problems typically solved using functional programming techniques?
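
    A minimal sketch of the usual functional answer (illustrative code, not from the original question, assuming C# 9 records): instead of mutating a particle's position in place, each simulation step is a pure function that returns a new particle value, and the whole simulation state is a new collection derived from the old one.

        using System.Collections.Generic;
        using System.Linq;

        // Immutable particle: "updating" it means constructing a new value.
        public sealed record Particle(double X, double Y, double Vx, double Vy);

        public static class Simulation
        {
            // Advance one particle by dt without modifying the input.
            public static Particle Step(Particle p, double dt) =>
                p with { X = p.X + p.Vx * dt, Y = p.Y + p.Vy * dt };

            // Advance the whole state; the old list stays untouched.
            public static IReadOnlyList<Particle> Step(IReadOnlyList<Particle> state, double dt) =>
                state.Select(p => Step(p, dt)).ToList();
        }

    The state still "changes" between frames, but the change is confined to one place: the caller replaces its reference to the old list with the new one, instead of every particle being mutated behind its back.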

    Read the article

  • Great debugging skills, weak problem solving

    - by Mahmoud
    For the 5 years I have worked for various companies, I have worked on large software such as computer vision kits, embedded systems, and games. I found I have very good debugging skills; I've even found and fixed bugs in frameworks. The problem is that I'm very weak at problem solving. I had an interview with Qualcomm, and they said: you're fine at software, but your problem solving is limited. I had the same result with Google. I'm very bad at solving puzzles and brain teasers. During the interviews I solved all of the software-related problems on the blackboard, but when I went to the GM and faced math problems and probabilities, I struggled. How can I improve my problem-solving skills?
    Edit: Some of the problems:
    - A cake that has been cut from anywhere and needs just one cut to be halved equally. I told him to cut it horizontally; he said no, consider it as a 2D problem!
    - Consider 3 concentric circles; each one can get a color, but not matched with another circle. How many blobs can you make out of those circles? (This was with the GM, Augmented Reality SDK.)
    - Consider an infinite train: you look out the window and there are two cars, one big and one small. What is the probability of seeing only a big car? I said 50%. He said: what if you don't know the length of the two cars, and you want the probability of getting the biggest one? I struggled and didn't solve it... I was really exhausted after a long day of interviews.
    - The probability of having a number divisible by 5 among the numbers from 1 to 100... struggled!!
    I solved all the coding questions, like reversing a string, detecting a cycle in a linked list, etc.

    Read the article

  • Problems cloning a GIT repository (Newbie problems)

    - by Brett Rigby
    Hi there, I'm trying to set up a Git server on my local dev machine and have been following this website so far, but I am a little stuck when trying to clone a repository. In Git Bash, here's my output:

        $ git clone ssh://[email protected]:4837/ssh/home/Administrator/project1.git
        Initialized empty Git repository in C:/Git/project1/.git/
        Permission denied (publickey,keyboard-interactive).
        fatal: The remote end hung up unexpectedly

    Any suggestions on why I would be getting a 'Permission denied (publickey,keyboard-interactive)' error? Thanks in advance!

    Read the article

  • IE7 float problems and hyperlink problems

    - by Uffo
    Markup:

        <ul class="navigation clearfix">
          <li class="navigation-top"></li>
          <div class="first-holder" style="height:153px;">
            <dl class="hold-items clearfix">
              <dd class="clearfix with"><a href="http://site.com" title="Protokoll">Protokoll</a></dd>
              <dd class="with-hover"><a href="http://site.com" title="Mein/e Unternehmen">Mein/e Unternehmen</a></dd>
              <dd class="with"><a class="face-me" href="http://site.com" title="Erweiterte Suche">Erweiterte Suche</a></dd>
              <dd class="with"><a href="http://site.com" title="Abmelden">Abmelden</a></dd>
            </dl>
          </div><!--[end] /.first-holder-->
          <li class="navigation-bottom"></li>
        </ul><!--[end] /.navigation-->

    CSS:

        .first-holder{height:304px;position:relative;width:178px;overflow:hidden;margin-bottom:0px;padding-bottom: 0px;}
        .hold-items{top:0px;position:absolute;}
        .navigation dd.with{line-height:38px;background:url('/images/sprite.png') no-repeat -334px -46px;width:162px;height:38px;padding-bottom:0px;overflow: hidden;}
        .navigation dd.with a{position:relative;outline:0;display:block;font-weight:bold;color:#3f78c0;padding-left:10px;line-height:38px;}
        .with-hover{background:url('/images/sprite.png') no-repeat -505px -47px;width:178px;height:38px;line-height:38px;overflow:none;}
        .with-hover a{position:relative;display:block;font-weight:bold;color:#fff;padding-left:10px}
        .navigation-top{background:url('/images/sprite.png') no-repeat -694px -46px;width:160px;height:36px;}
        .navigation-top a{display:block;outline:0;height:20px;padding-top:18px;padding-left:138px;}
        .navigation-top a span{display:block;background:url('/images/sprite.png') no-repeat -212px -65px;width:8px;height:6px;}
        .navigation-bottom{background:url('/images/sprite.png') no-repeat -784px -402px;width:160px;height:37px;}
        .navigation-bottom a{display:block;outline:0;height:20px;padding-top:18px;padding-left:138px;}
        .navigation-bottom a span{display:block;background:url('/images/sprite.png') no-repeat -212px -74px;width:8px;height:6px;}

    Also, the links are not clickable: if I click on a link in IE7 it doesn't do the action; it doesn't redirect me to the location. This is how it looks in IE7: http://screencast.com/t/MGY4NjljZjc This is how it looks in IE8, Firefox, Chrome and so on: http://screencast.com/t/MzhhMDQ1M What am I doing wrong? PS: I'm using .navigation-top a span and .navigation-bottom a span somewhere else, but that's OK, it works fine.

    Read the article

  • PowerShell in Practice

    This article is taken from the book PowerShell in Practice. As part of a chapter on recent and forthcoming innovations in PowerShell, this excerpt explores what you can do with the virtualization functions in the Hyper-V PowerShell library.

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components which depend on each other. Let's be more specific: software component A is a firmware running on an embedded device, and component B is its respective driver for a normal PC (Linux/Windows machine). They communicate with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks whether the firmware version is compatible with the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: we plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention leads to the following example situation: the current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, which will cause the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say at 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below: Can you suggest a best practice for handling the issues described above? Do you think our "custom" convention is good? What changes would you apply to the described convention? Thanks a lot for your feedback! PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.
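
    A minimal sketch of the compatibility check under one possible reading of the proposed convention (illustrative code, not from the original question): since any API change bumps the minor number, driver and firmware speak the same protocol exactly when major and minor match, and odd minor numbers mark unstable releases.

        using System;

        public record ComponentVersion(int Major, int Minor, int Patch)
        {
            public static ComponentVersion Parse(string s)
            {
                var p = s.Split('.');
                return new ComponentVersion(int.Parse(p[0]), int.Parse(p[1]), int.Parse(p[2]));
            }

            // API changes always bump the minor number, so compatibility
            // ignores the patch component entirely.
            public bool IsApiCompatibleWith(ComponentVersion other) =>
                Major == other.Major && Minor == other.Minor;

            // Even minor = stable branch, odd minor = unstable/experimental.
            public bool IsUnstable => Minor % 2 != 0;
        }

    Note that this reading treats an unstable 1.5.x and the stable 1.4.0 it is later released as if they were different APIs; if those paired even/odd branches should be accepted interchangeably, the check needs an extra mapping between them.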

    Read the article

  • All about the Fusion Middleware Best Practice Centers for Applications

    Nishit Rao, Group Product Manager, and Markus Zirn, Senior Director, Oracle Fusion Middleware, discuss Oracle's Fusion Middleware Best Practice Centers for E-Business Suite, PeopleSoft and Siebel, and how application developers can use the how-to guides, blogs and webcasts to learn FMW components and create SOA solutions with their favorite applications.

    Read the article

  • Using CSS3 is a bad practice? [closed]

    - by Qmal
    Possible Duplicate: Should I use HTML5 and/or CSS3 to build my website? I just want to know if it's considered "bad practice" to use things like rounded corners, gradients and so on... I understand that there are bots and crawlers that do not process CSS, but they don't need to. And nowadays most people use browsers that can process CSS3 with no problem. So should I make my buttons and shadows and such look pretty with CSS3, or with images?

    Read the article

  • use of Enum with flags in practice?

    - by user576510
    I have just read some stuff on enums today. The use of flags with enums was something interesting and new to me. But practice and theoretical uses are often different. I went through many articles; the examples they quoted were good for getting the concept, but I am still wondering in what situations one can use enums with flags to store multiple values. I will highly appreciate it if you can share your practical experience of using enums with flags.
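
    A minimal sketch of the pattern being asked about (illustrative code, not from the original question): a [Flags] enum gives each member its own bit, so a single field can store several options at once; permission or file-access sets are the classic practical case.

        using System;

        [Flags]
        public enum SharePermissions
        {
            None   = 0,
            Read   = 1 << 0,   // 1
            Write  = 1 << 1,   // 2
            Delete = 1 << 2,   // 4
            Share  = 1 << 3    // 8
        }

        public static class Demo
        {
            public static void Main()
            {
                // Store several options in one value.
                var p = SharePermissions.Read | SharePermissions.Write;

                // Test individual options.
                bool canWrite  = p.HasFlag(SharePermissions.Write);     // true
                bool canDelete = (p & SharePermissions.Delete) != 0;    // false

                Console.WriteLine($"{p} -> write={canWrite}, delete={canDelete}");
                // Prints: "Read, Write -> write=True, delete=False"
            }
        }

    In practice the combined value is what you persist (one integer column or settings entry) instead of a separate boolean per option.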

    Read the article

  • Code testing practice

    - by Robin Castlin
    So now I have come to the conclusion, like many others, that having some way of constantly testing your code is good practice, since it means fewer people need to be involved (colleagues and customers alike) by simply knowing what's wrong before someone else finds out the hard way. I've heard and read some about unit testing and understand what it's supposed to do and all. Then there are so many different types of bugs. It can be everything from the web browser not being able to send correct values, JavaScript failing, a global function messing up a piece of code somewhere, to a change that looked good when testing it out but fails in some special case which was hard to anticipate. By simply finding these errors I learn to rarely repeat them, but there seem to always be new bugs to be found and learnt from. I would guess the best practice would be to run every page and its functions a couple of times, witness the result and repeat this in Firefox, Chrome and Internet Explorer (and all smartphones, apparently) to make sure it works as intended. However this would take quite some time, considering I don't work with patches/versions and do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended. Basically just run a lot of cURL requests with POST values and see if I get the expected result. But how would I preferably avoid increasing the IDs of every MySQL row if I delete these testing rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some kind of smooth way to return a "TRUE" on testing instead of the actual page, but for the moment this solution would have to be applied to existing projects.
    My question: What would you recommend as the best way to test my site to make sure that existing functions do their job when editing the code? Should I implement a lot of edits first, then test the entire code manually to make sure it still works? Is there any nice way of testing code without "hurting" the ID columns?
    Extra thoughts: Would it be a good idea to associate all of my files with the different parts of my site which they affect? For instance, if I edit home.php I will, through documentation, test whether my homepage works as intended, since it's the only part of my site it should affect.

    Read the article

  • Best practice when working with web services that return objects?

    - by Raj
    Hello, I'm currently working with web services that return objects such as a list of files, e.g. a File array. I wanted to know whether it's best practice to bind this type of object directly to my front-end code, for example a repeater/listview, or whether to first parse it into my own list of "file" classes, e.g. CustomFile[]. If the web service changes, then it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work. Thanks, Raj.
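
    A minimal sketch of the second option (illustrative code, not from the original question; ServiceFile stands in for whatever proxy type the web service reference generates): the UI binds only to the local CustomFile class, so a change in the service contract touches a single mapping method instead of every repeater/listview.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical type generated from the web service reference.
        public class ServiceFile
        {
            public string Name { get; set; }
            public long SizeBytes { get; set; }
        }

        // Local class the front end actually binds to.
        public class CustomFile
        {
            public string Name { get; set; }
            public long SizeBytes { get; set; }
        }

        public static class FileMapping
        {
            // The only place that knows about the service type; if the web
            // service changes, only this method needs updating.
            public static CustomFile ToCustomFile(ServiceFile f) =>
                new CustomFile { Name = f.Name, SizeBytes = f.SizeBytes };

            public static IList<CustomFile> ToCustomFiles(IEnumerable<ServiceFile> files) =>
                files.Select(ToCustomFile).ToList();
        }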

    Read the article

  • What is the best practice, point of view of well experienced developers

    - by Damien MIRAS
    My manager pushes me to use his self-defined best practices. All of these practices are based on his own assumptions. I disagree with them, and I would like to have some feedback from well experienced people about these practices. I would prefer answers from people involved in the maintenance of huge products, and people who have maintained a product for years. Developers with 15+ years of experience are preferred, because my manager has that much experience himself. I have 7 years of experience. Here are the practices he wants me to use:

    Never extend classes; use composition and interfaces instead, because extended classes are unmaintainable and difficult to debug.

    What I think about that: Extend when needed, respect the Liskov Substitution Principle and you'll never be stuck with a problem, but prefer composition and decoration (see the sketch after this post). I don't know any serious project which has banned inheritance; sometimes it's impossible not to use it, e.g. in a UI framework. Design patterns are just unusable.

    In PHP, for simple use cases (for example, a user needs a web interface to view a database table), his "best practice" is: copy some random PHP code which we own, paste it and modify it, put HTML and PHP code in the same file, and never use classes in PHP (it doesn't work well for small jobs, in fact it doesn't work well at all, and there is no good tool to work with it). Copy & paste PHP code is good practice for maintenance, because scripts are independent: if you have a bug somewhere you can fix it without side effects.

    What I think about that: NEVER EVER COPY code, or do it only because you have five minutes to deliver something, and then do some refactoring afterwards. Copy & paste code is a beginner's error; if you have an error, you'll have that error everywhere you have pasted it, and it's a nightmare to maintain. If you respect the Open/Closed Principle you'll rarely get edge effects; use unit tests if you are afraid of that. For small jobs in PHP, at least write the HTML separately from the PHP code and reuse it any time you need it. Classes in PHP are mature, not as mature as in other languages like Python or Java, but they are usable. There are tools for working with classes in PHP, like Zend Studio, that work very well. Whether to use classes or not depends not on the language you use but on the programming paradigm you have chosen. I'm an OOP developer, I use PHP5; why do I have to shoot myself in the foot?

    When you find a simple bug in the code, and you can fix it simply, if you are not working on the code where you have found it, never fix it, even if it takes 5 seconds.

    He says to me that his "best practices" are driven by the fact that he has a lot of experience in maintaining software in production (14 years), and that he now knows what works and what doesn't work, what the community says is a fad, and that the people advocating principles such as "never copy & paste code" are not involved in maintaining applications.

    What I think about that: If you find a bug, fix it if you can do it quickly, inform the people who've touched that code before, check that you have not introduced a new bug, and ideally add a unit test for it.

    I currently work on a web commerce project which serves 15k unique users per day. The code base has to be maintained and has been maintained this way since 2005. Ideally, include a short description of your position and experience, in terms of years effectively maintaining an application which has been in production for real.
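
    As a small, hedged illustration of the distinction being argued about (made-up names, not from the original post): inheritance is appropriate when the subtype really is substitutable for the base type, while composition/decoration lets you wrap behaviour without subclassing anything.

        // Extension: a CsvReportExporter *is an* exporter and can be used
        // wherever the interface is expected (Liskov substitution holds).
        public interface IReportExporter { void Export(string report); }

        public class CsvReportExporter : IReportExporter
        {
            public void Export(string report) { /* write CSV somewhere */ }
        }

        // Composition/decoration: wrap an existing exporter to add behaviour
        // instead of extending its class.
        public class LoggingExporter : IReportExporter
        {
            private readonly IReportExporter _inner;
            public LoggingExporter(IReportExporter inner) { _inner = inner; }

            public void Export(string report)
            {
                System.Console.WriteLine("Exporting report...");  // added behaviour
                _inner.Export(report);                            // delegate the real work
            }
        }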

    Read the article

  • Advancing Code Review and Unit Testing Practice

    - by Graviton
    As a team lead managing a group of developers with no experience in (and who see no need for) code review and unit testing, how can you advance the practice of code review and unit testing? How are you going to create a way for code review and unit testing to fit naturally into the developers' flow? One source of resistance in these two areas is that "we are always tight on deadline, so there is no time for code review and unit testing". Another source of resistance for code review is that we currently don't know how to do it. Should we review the code upon every check-in, or review the code at a specified date?

    Read the article

  • Integrating application ad-support - best practice

    - by Jarede
    Considering the review that came out in March: researchers from Purdue University, in collaboration with Microsoft, claim that third-party advertising in free smartphone apps can be responsible for as much as 65 percent to 75 percent of an app's energy consumption. Is there a best practice for integrating advert support into mobile applications, so as not to drain the user's battery too much? When you fire up Angry Birds on your Android phone, the researchers found that the core gaming component only consumes about 18 percent of total app energy. The biggest battery drain comes from the software powering third-party ads and analytics, accounting for 45 percent of total app energy, according to the study. Has anyone found better ways of keeping away from the "3G tail", as the report puts it? Is it better/possible to download a large set of adverts that are cached for a few hours, and use them to populate your ad space, to avoid constant use of the wifi/3G radios? Are there any best practices for the inclusion of adverts in mobile apps?
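
    A minimal sketch of the batching idea raised above (illustrative code, not from the original question; the fetch delegate stands in for whatever ad SDK or HTTP call the app actually makes): fetch a batch of adverts once, serve them from a local cache until it expires, and only then touch the radio again.

        using System;
        using System.Collections.Generic;

        public class AdCache
        {
            private readonly Func<IReadOnlyList<string>> _fetchBatch;  // hits the network
            private readonly TimeSpan _ttl;                            // e.g. a few hours
            private IReadOnlyList<string> _ads = Array.Empty<string>();
            private DateTime _fetchedAt = DateTime.MinValue;
            private int _next;

            public AdCache(Func<IReadOnlyList<string>> fetchBatch, TimeSpan ttl)
            {
                _fetchBatch = fetchBatch;
                _ttl = ttl;
            }

            public string NextAd()
            {
                // Only use the radio when the cached batch is empty or expired.
                if (_ads.Count == 0 || DateTime.UtcNow - _fetchedAt > _ttl)
                {
                    _ads = _fetchBatch();
                    _fetchedAt = DateTime.UtcNow;
                    _next = 0;
                }
                if (_ads.Count == 0) return null;                      // nothing to show
                var ad = _ads[_next];
                _next = (_next + 1) % _ads.Count;
                return ad;
            }
        }

    Whether this is acceptable depends on the ad network's terms (many require fresh impressions), so it is a trade-off to check rather than a drop-in fix.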

    Read the article

  • URL good practice for category sub category?

    - by Seting
    I have developed an application and I need to make its URLs SEO-friendly. I have the following URL structure: http://localhost:3000/posts/product/testing-with-slug-url-2 and http://localhost:3000/posts/product/testing-with-slug-url-2-4-23 Is this good practice? If not, how can I rewrite it? OK, I'll explain my application. My application is a shopping site. If a customer searches for mobiles, it will redirect to a URL like this: http://mydomain.com/cat/mobile-3 The 3 in the URL is my database id; it is used for further searching. After the user reaches the mobile page he may want to filter by some brand, e.g. Nokia, so my URL looks like: http://mydomain.com/subcat/nokia-3-2 The integers at the end refer to the category id (3) and the brand id (2). My doubt is whether the integers at the end of the URL will affect SEO ranking.
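
    Not from the original question, but a common pattern with this kind of URL is to treat only the trailing id as authoritative and the slug text as decoration; a hypothetical helper, assuming the slug always ends in "-<id>":

        using System;

        public static class SlugHelper
        {
            // "mobile-3" -> 3, "testing-with-slug-url-2-4-23" -> 23 (last number wins).
            public static int ExtractTrailingId(string slug)
            {
                int lastDash = slug.LastIndexOf('-');
                return int.Parse(slug.Substring(lastDash + 1));
            }

            // Slug built from the display name plus the database id.
            public static string MakeSlug(string name, int id) =>
                name.ToLowerInvariant().Replace(' ', '-') + "-" + id;
        }

    If the lookup only ever uses the extracted id, the words in front of it can change freely without breaking the page, which is the usual argument for this layout.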

    Read the article

  • How would you practice concurrency and multi-threading?

    - by Xavier Nodet
    I've been reading about concurrency, multi-threading, and how "the free lunch is over". But I've not yet had the opportunity to use MT in my job. I'm thus looking for suggestions about what I could do to get some practice with CPU-heavy MT, through exercises or participation in some open-source projects. Thanks. Edit: I'm more interested in open-source projects that use MT for CPU-bound tasks, or simply algorithms that are interesting to implement using MT, rather than books or papers about tools like threads, mutexes and locks...
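
    Not from the original question, but one self-contained exercise that gives exactly this kind of CPU-bound practice is a Monte Carlo estimate of pi: embarrassingly parallel, easy to verify, and easy to compare against a single-threaded version. A minimal sketch using Parallel.For:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        public static class MonteCarloPi
        {
            public static double Estimate(long samplesPerWorker, int workers)
            {
                long[] hits = new long[workers];

                Parallel.For(0, workers, w =>
                {
                    var rng = new Random(Guid.NewGuid().GetHashCode()); // per-worker RNG, no shared state
                    long inside = 0;
                    for (long i = 0; i < samplesPerWorker; i++)
                    {
                        double x = rng.NextDouble(), y = rng.NextDouble();
                        if (x * x + y * y <= 1.0) inside++;
                    }
                    hits[w] = inside;                                   // each worker writes only its own slot
                });

                return 4.0 * hits.Sum() / (samplesPerWorker * (double)workers);
            }

            public static void Main() =>
                Console.WriteLine(Estimate(5_000_000, Environment.ProcessorCount));
        }

    Timing this against a plain loop over the same total number of samples is a quick way to see the scaling (and the cost of any accidental shared state you introduce).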

    Read the article

  • Best practice for managing dynamic HTML modules?

    - by jt0dd
    I've been building web apps that add and remove lots of dynamic content, and even structure, within the page, and I'm not impressed by the method I'm using to do it. When I want to add a section or module into a position in the interface, I'm storing the HTML in the code, and I don't like that:

        if (rank == "moderator") {
            $("#header").append('<div class="mod_controls">' +
                // content, using + to implement line breaks
                '</div>');
        }

    This seems like such a bad programming practice. There must be a better way. I thought of building a function to convert a JSON structure to HTML, but it seems like overkill. For the functionality of the apps, I'm using: Node.js, JS, jQuery, AJAX. Is there some common way to store HTML modules externally for AJAX importation?

    Read the article
