Search Results

Search found 11160 results on 447 pages for 'color mapping'.

  • Synergy Key Mapping

    - by Tauren
    I'm running a Synergy server on Ubuntu and a Synergy+ client on OS X. The server has a standard Windows keyboard with shift, ctrl, windows, and alt keys. My MacBook Pro has shift, fn, control, alt/option, and command keys. When I press ctrl-c, ctrl-v, etc., the appropriate copy/paste action doesn't happen on the Mac, but it does in Ubuntu. If I'm controlling the Mac and press alt-c, alt-v, then I get the copy/paste action. So I played around with key mapping in synergy.conf and found that the following allows me to do copy/paste with ctrl-c/ctrl-v:

      section: screens
        godzilla:
        mbp.local:
          ctrl = alt
          alt = ctrl
      end

    Is this all that I need to do? Or are there other mappings that will help as well? The Synergy configuration page refers to the following key mappings. What are the equivalent keys for each of these on the Windows keyboard and the Mac keyboard? What is a meta or super key?

      shift = {shift|ctrl|alt|meta|super|none}
      ctrl  = {shift|ctrl|alt|meta|super|none}
      alt   = {shift|ctrl|alt|meta|super|none}
      meta  = {shift|ctrl|alt|meta|super|none}
      super = {shift|ctrl|alt|meta|super|none}

    Thanks!
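
    In Synergy's terms, meta and super are the extra modifier keys inherited from X11: on a PC keyboard they normally correspond to the Windows key, and on an Apple keyboard to the Command key (the exact assignment has varied between Synergy releases, so treat this as a rule of thumb). The swap shown above is usually enough for copy/paste; a possible fuller mapping, offered only as an untested sketch, would also route the Command/Windows modifier to ctrl on the server screen:

      section: screens
        godzilla:
          super = ctrl
        mbp.local:
          ctrl = alt
          alt = ctrl
      end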

    Read the article

  • Color Printer: Laser vs Inkjet

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I've used inkjets for decades. I need a printer that can deliver the same high quality as these photo inkjet printers, but I'm tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was thinking about buying a color laser printer, but I'm not sure these printers can deliver the same quality or are worth the investment in terms of toner consumption. I remember that my old LaserJet was able to print 1,100 pages per toner cartridge. The inkjet printers I have can print 500 pages per cartridge. Price for price, 2 inkjet cartridges cost more or less the same as one toner cartridge and in theory print almost the same number of pages. I am not sure if this is true for color lasers. What can you guys tell me about quality, toner cost and cost per page for laser versus inkjet printers? Is it worth the change? (Keep in mind that an inkjet printer costs $50 and a laser printer costs $200.) Thanks.
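
    Working only from the figures quoted above, and writing the price of one inkjet cartridge as X (so a toner cartridge is roughly 2X), the per-page cost comes out close either way:

      inkjet:  X / 500 pages   = 0.0020 X per page
      laser:  2X / 1100 pages  ≈ 0.0018 X per page

    Note that a color laser uses four toner cartridges (CMYK), so the real comparison for color pages depends on how quickly each of those is consumed; the figures above only cover the single-cartridge case described in the question.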

    Read the article

  • Named query not known error trying to call a stored proc using Fluent NHibernate

    - by Hamman359
    I'm working on setting up NHibernate for a project and I have a few queries that, due to their complexity, we will be leaving as stored procedures. I'd like to be able to use NHibernate to call the sprocs, but have run into an error I can't figure out. Since I'm using Fluent NHibernate I'm using mixed mode mapping as recommended here. However, when I run the app I get a "Named query not known: AccountsGetSingle" exception and I can't figure out why. I think I might have a problem with my HBM mapping since I'm not very familiar with using them, but I'm not sure. My NHibernate configuration code is:

      private ISessionFactory CreateSessionFactory()
      {
          return Fluently.Configure()
              .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(conn => conn.FromConnectionStringWithKey("CIDB"))
                  .ShowSql())
              .Mappings(m =>
              {
                  m.HbmMappings.AddFromAssemblyOf<Account>();
                  m.FluentMappings.AddFromAssemblyOf<Account>();
              })
              .BuildSessionFactory();
      }

    My hbm.xml file is:

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
        <sql-query name="AccountsGetSingle">
          <return alias="Account" class="Core, Account"></return>
          exec AccountsGetSingle
        </sql-query>
      </hibernate-mapping>

    And the code where I am calling the sproc looks like this:

      public Account Get()
      {
          return _conversation.Session
              .GetNamedQuery("AccountsGetSingle")
              .UniqueResult<Account>();
      }

    Any thoughts or ideas would be appreciated. Thanks.
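
    A couple of things commonly behind a "Named query not known" error, offered here as guesses rather than a confirmed diagnosis: the .hbm.xml file must be compiled into the assembly as an Embedded Resource (otherwise HbmMappings.AddFromAssemblyOf<Account>() never loads it), its file name must end in .hbm.xml, and the return class is normally written as "Full.Type.Name, AssemblyName". Assuming the Account entity lives in a Core assembly, the mapping might then look like:

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
        <sql-query name="AccountsGetSingle">
          <return alias="Account" class="Core.Account, Core" />
          exec AccountsGetSingle
        </sql-query>
      </hibernate-mapping>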

    Read the article

  • NHibernate SubclassMap gives DuplicateMappingException

    - by stiank81
    I'm using NHibernate to handle my database, with Fluent configuration. I'm not using automappings; all mappings are written explicitly, and everything is working just fine. Now I wanted to add my first mapping to a subclass, using SubclassMap, and I run into problems. With the simplest possible setup an NHibernate DuplicateMappingException is thrown, saying that the subclass is mapped more than once:

      NHibernate.MappingException : Could not compile the mapping document: (XmlDocument)
      ---- NHibernate.DuplicateMappingException : Duplicate class/entity mapping MyNamespace.SubPerson

    I get this with my simple classes written for testing:

      public class Person
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      public class SubPerson : Person
      {
          public string Foo { get; set; }
      }

    With the following mappings:

      public class PersonMapping : ClassMap<Person>
      {
          public PersonMapping()
          {
              Not.LazyLoad();
              Id(c => c.Id);
              Map(c => c.Name);
          }
      }

      public class SubPersonMapping : SubclassMap<SubPerson>
      {
          public SubPersonMapping()
          {
              Not.LazyLoad();
              Map(m => m.Foo);
          }
      }

    Any idea why this is happening? If there were automappings involved I guess it might have been caused by the automappings adding a mapping too, but there should be no automapping. I create my database specifying a fluent mapping:

      private static ISession CreateSession()
      {
          var cfg = Fluently.Configure().
              Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("unit_test.db")).
              Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeClassInTheAssemblyContainingAllMappings>());

          var sessionSource = new SessionSource(cfg.BuildConfiguration().Properties, new TestModel());
          var session = sessionSource.CreateSession();
          _sessionSource.BuildSchema(session);
          return session;
      }

    Again, note that this only happens with SubclassMap; ClassMaps are working just fine!
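
    One configuration detail worth checking, offered only as a guess: in CreateSession the fluent mappings are added via AddFromAssemblyOf, and the SessionSource is then given a TestModel as its persistence model. If TestModel also adds mappings from the same assembly, maps can end up registered twice, which can produce exactly this kind of DuplicateMappingException. A sketch that keeps the registration in a single place might look like:

      private static ISessionFactory CreateSessionFactory()
      {
          return Fluently.Configure()
              .Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("unit_test.db"))
              // Register the mappings exactly once, here and nowhere else.
              .Mappings(m => m.FluentMappings
                  .AddFromAssemblyOf<SomeClassInTheAssemblyContainingAllMappings>())
              .BuildSessionFactory();
      }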

    Read the article

  • Fluent NHibernate: Enum in composite key gets mapped to int when I need string

    - by Quintin Par
    By default the behaviour of FNH is to map enums to their string values in the db. But while mapping an enum as part of a composite key, the property gets mapped as int. E.g. in this case:

      public class Address : Entity
      {
          public Address() { }
          public virtual AddressType Type { get; set; }
          public virtual User User { get; set; }

    where AddressType is:

      public enum AddressType { PRESENT, COMPANY, PERMANENT }

    The FNH mapping is:

      mapping.CompositeId().KeyReference(x => x.User, "user_id").KeyProperty(x => x.Type);

    The schema creation of this mapping results in:

      create table address (
          Type INTEGER not null,
          user_id VARCHAR(25) not null,

    and the hbm as:

      <composite-id mapped="true" unsaved-value="undefined">
        <key-property name="Type" type="Company.Core.AddressType, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
          <column name="Type" />
        </key-property>
        <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
          <column name="user_id" />
        </key-many-to-one>
      </composite-id>

    where the AddressType should have been generated as type="FluentNHibernate.Mapping.GenericEnumMapper`1[[Company.Core.AddressType, ... How do I instruct FNH to map it as the default string enum generic mapper?

    Read the article

  • With NHibernate, how can I add a child object when updating a parent object?

    - by BMZ
    I have a simple Parent/Child relationship between a Person object and an Address object. The Person object exists in the DB. After doing a Get on the Person, I add a new Address object to the Address sub-object list of the parent, and do some other updates to the Person object. Finally, I do an Update on the Person object. With a SQL trace window, I can see the update to the Person object in the Person table and the insert of the Address record into the Address table. The issue is that, after the update is performed, the AddressId (primary key on the Address object) is still set to 0, which is what it defaults to when you first initialize the Address object. I have verified that when I do an Add, this value is set correctly. Is this a known issue when trying to add sub-objects as part of an NHibernate UPDATE? Sample code and mapping files are below. Thanks.

      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
        <class name="BusinessEntities.Wellness.Person,BusinessEntities.Wellness" table="Person" lazy="true" dynamic-insert="true" dynamic-update="false">
          <id name="Personid" column="PersonID" type="int">
            <generator class="native" />
          </id>
          <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`"/>
          <property type="int" not-null="true" name="Customerid" column="`CustomerID`" />
          <property type="AnsiString" not-null="true" length="9" name="Ssn" column="`SSN`" />
          <property type="AnsiString" not-null="true" length="30" name="FirstName" column="`FirstName`" />
          <property type="AnsiString" not-null="true" length="35" name="LastName" column="`LastName`" />
          <property type="AnsiString" length="1" name="MiddleInitial" column="`MiddleInitial`" />
          <property type="DateTime" name="DateOfBirth" column="`DateOfBirth`" />
          <bag name="PersonAddresses" inverse="true" lazy="true" cascade="all">
            <key column="PersonID" />
            <one-to-many class="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" />
          </bag>
        </class>
      </hibernate-mapping>

      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
        <class name="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" table="PersonAddress" lazy="true" dynamic-insert="true" dynamic-update="false">
          <id name="PersonAddressId" column="PersonAddressID" type="int">
            <generator class="native" />
          </id>
          <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" />
          <property type="AnsiString" not-null="true" length="1" name="AddressTypeid" column="`AddressTypeID`" />
          <property type="AnsiString" not-null="true" length="60" name="AddressLine1" column="`AddressLine1`" />
          <property type="AnsiString" length="60" name="AddressLine2" column="`AddressLine2`" />
          <property type="AnsiString" length="60" name="City" column="`City`" />
          <property type="AnsiString" length="2" name="UsStateId" column="`USStateID`" />
          <property type="AnsiString" length="5" name="UsPostalCodeId" column="`USPostalCodeID`" />
          <many-to-one name="Person" cascade="none" column="PersonID" />
        </class>
      </hibernate-mapping>

      Person newPerson = new Person();
      newPerson.PersonName = "John Doe";
      newPerson.SSN = "111111111";
      newPerson.CreatedBy = "RJC";
      newPerson.CreatedDate = DateTime.Today;
      personDao.AddPerson(newPerson);

      Person updatePerson = personDao.GetPerson(newPerson.PersonId);
      updatePerson.PersonAddresses = new List<PersonAddress>();
      PersonAddress addr = new PersonAddress();
      addr.AddressLine1 = "1 Main St";
      addr.City = "Boston";
      addr.State = "MA";
      addr.Zip = "12345";
      updatePerson.PersonAddresses.Add(addr);
      personDao.UpdatePerson(updatePerson);
      int addressID = updatePerson.PersonAddresses[0].AddressId;
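
    Two things that commonly explain an id staying at 0 in this situation, offered as a hedged guess rather than a confirmed answer: with inverse="true" on the bag, NHibernate expects the child to carry the association, so addr.Person should be set before saving; and with a native generator the id is only assigned when the INSERT is actually flushed, so it should be read after the flush or commit. A sketch, assuming personDao exposes (or you otherwise have access to) the underlying ISession as session:

      Person updatePerson = personDao.GetPerson(newPerson.PersonId);
      PersonAddress addr = new PersonAddress();
      addr.AddressLine1 = "1 Main St";
      addr.City = "Boston";
      // inverse="true" means the child side owns the foreign key,
      // so the back-reference must be set explicitly.
      addr.Person = updatePerson;
      updatePerson.PersonAddresses.Add(addr);
      personDao.UpdatePerson(updatePerson);
      session.Flush();   // native ids are assigned when the INSERT hits the database
      int addressID = updatePerson.PersonAddresses[0].AddressId;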

    Read the article

  • using onDelete with Doctrine 2

    - by tamir
    I can't get onDelete to work in Doctrine 2 (with YAML mapping). I tried this relation in my Product class:

      oneToOne:
        category:
          targetEntity: Category
          onDelete: CASCADE

    But that doesn't work. EDIT: I've set the ON DELETE: CASCADE manually in the database, imported the YAML mapping with doctrine:mapping:import, emptied the database, updated it from the schema with doctrine:schema:update, and got no ON DELETE in the foreign key. So it looks like even Doctrine doesn't know how to do it, lol.
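
    In Doctrine 2, onDelete is an option of the join column rather than of the association itself, so the usual way to get an ON DELETE CASCADE foreign key is something along these lines (the column names here are assumptions and should be adjusted to the actual schema):

      oneToOne:
        category:
          targetEntity: Category
          joinColumn:
            name: category_id
            referencedColumnName: id
            onDelete: CASCADE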

    Read the article

  • How do I generate Hibernate models for Java dotCMS plugins

    - by shuxer
    Hi, I am trying to create a plugin for dotCMS. My plugin stores data in a database. I defined a Hibernate mapping file and put it in my plugin folder's conf dir, but I have no idea how to make Hibernate generate model classes based on my mapping definition. I am using the hello world plugin's mapping file for MySQL. Any help or comment would be highly appreciated. Thanks in advance.

    Read the article

  • NHibernate: Dynamically swapping a single domain model between multiple physical data models

    - by Nigel
    Hi. In this article Ayende describes how to map a single domain model to multiple physical data models. Is it possible to extend this principle such that the mapping can be chosen dynamically? So for example, imagine we had an entity that could be written to the same physical schema in three ways depending on its current status, and let's assume that regardless of status each entity had a unique identifier. One solution would be to represent the entity in its different states with three separate classes: one for each mapping. Then the entity could be loaded and, in order to change its state, the entity could be mapped to a class representing one of its other states and then saved back to the schema, making use of a different mapping. I was wondering if it is at all possible to have the same entity represented by one class that held a status flag (kind of like a discriminator), and any save to the schema would choose the appropriate mapping based on the value of the status flag. Hopefully that made sense! Many thanks.
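
    One way to make the choice at runtime, sketched here only as an idea rather than a tested solution, is to keep a single class, map it several times under different entity-names using NHibernate's entity-name support, and pick the entity-name from the status flag when saving. The Order class and status values below are hypothetical:

      public void Save(ISession session, Order order)
      {
          // Each status has its own <class entity-name="..."> mapping over the
          // same CLR type; the name chosen here decides which physical mapping
          // NHibernate uses for this particular save.
          string entityName = "Order_" + order.Status;   // e.g. "Order_Draft"
          session.Save(entityName, order);
      }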

    Read the article

  • How to find unmapped properties in an NHibernate-mapped class?

    - by haarrrgh
    I just had a NHibernate related problem where I forgot to map one property of a class. A very simplified example:

      public class MyClass
      {
          public virtual int ID { get; set; }
          public virtual string SomeText { get; set; }
          public virtual int SomeNumber { get; set; }
      }

    ...and the mapping file:

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="MyAssembly" namespace="MyAssembly.MyNamespace">
        <class name="MyClass" table="SomeTable">
          <property name="ID" />
          <property name="SomeText" />
        </class>
      </hibernate-mapping>

    In this simple example, you can see the problem at once: there is a property named "SomeNumber" in the class, but not in the mapping file. So NHibernate will not map it and it will always be zero. The real class had a lot more properties, so the problem was not as easy to see and it took me quite some time to figure out why SomeNumber always returned zero even though I was 100% sure that the value in the database was != zero. So, here is my question: Is there some simple way to find this out via NHibernate? Like a compiler warning when a class is mapped, but some of its properties are not. Or some query that I can run that shows me unmapped properties in mapped classes...you get the idea. (Plus, it would be nice if I could exclude some legacy columns that I really don't want mapped.)
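
    One approach, shown here as a rough sketch rather than a polished tool, is a unit test that compares each mapped class's CLR properties against what NHibernate itself reports in its metadata; known legacy columns can simply be filtered out of the result:

      using System.Collections.Generic;
      using System.Linq;
      using NHibernate;

      public static IEnumerable<string> FindUnmappedProperties(ISessionFactory factory, System.Type type)
      {
          var meta = factory.GetClassMetadata(type);
          // Everything NHibernate knows about: the identifier plus all mapped properties.
          var mapped = new HashSet<string>(meta.PropertyNames) { meta.IdentifierPropertyName };
          return type.GetProperties()
                     .Select(p => p.Name)
                     .Where(name => !mapped.Contains(name));
      }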

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
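
    As a small footnote to the .ToList() pitfall mentioned above: keeping the query on IQueryable<T> (i.e. simply dropping the .ToList() call) lets the provider translate the filter into SQL instead of pulling every customer into memory first:

      // The WHERE clause now runs on the database server,
      // not as an in-memory filter over all customers.
      var q = from c in metaData.Customers
              where c.Country == "Norway"
              select c;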

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • WPF Binding to change fill color of ellipse

    - by user294382
    Probably a simple question, but how do I programmatically change the color of an ellipse that is defined in XAML, based on a variable? Everything I've read on binding is based on collections and lists; can't I set it simply (and literally) based on the value of a string variable?

      string color = "red";
      color = "#FF0000";
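
    One direct option, shown as a sketch (myEllipse is assumed to be the x:Name given to the ellipse in XAML), is to skip binding altogether and set the Fill brush from code-behind; BrushConverter accepts both named colors ("Red") and hex strings:

      using System.Windows.Media;

      string color = "#FF0000";
      // Convert the string to a brush and apply it to the named ellipse.
      myEllipse.Fill = (Brush)new BrushConverter().ConvertFromString(color);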

    Read the article

  • Entity Framework 4 mapping fragment error when adding new entity scalar

    - by Jason Morse
    I have an Entity Framework 4 model-first design. I create a first draft of my model in the designer and all was well. I compiled, generated database, etc. Later on I tried to add a string scalar (Nullable = true) to one of my existing entities and I keep getting this type of error when I compile: Error 3004: Problem in mapping fragments starting at line 569: No mapping specified for properties MyEntity.MyValue in Set MyEntities. An Entity with Key (PK) will not round-trip when: Entity is type [MyEntities.MyEntity] I keep having to manually open the EDMX file and correct the XML whenever I add scalars. Ideas on what's going on?

    Read the article

  • UITableViewCell with custom gradient background, with another gradient as highlight color

    - by Rich
    I have a custom UITableViewCell with a custom layout. I wanted a gradient background, so in my UITableViewDelegate cellForRowAtIndexPath: method, I create a CAGradientLayer and add it to the cell's layer with insertSubLayer:atIndex: (using index 0). This works just fine except for two things: Most importantly, I can't figure out how to change to a different gradient color when the row is highlighted. I have tried a couple things, but I'm just not familiar enough with the framework to get it working. Where would be the ideal place to put that code, inside the table delegate or the cell itself? Also, there's a 1px white space in between each cell in the table. I have a background color on the main view, a background color on the table, and a background color on the cell. Is there some kind of padding or spacer by default in a UITableView?

    Read the article

  • No endpoint mapping found for..., using SpringWS, JaxB Marshaller

    - by Saky
    I get this error: No endpoint mapping found for [SaajSoapMessage {http://mycompany/coolservice/specs}ChangePerson] Following is my ws config file: <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping"> <description>An endpoint mapping strategy that looks for @Endpoint and @PayloadRoot annotations.</description> </bean> <bean class="org.springframework.ws.server.endpoint.adapter.MarshallingMethodEndpointAdapter"> <description>Enables the MessageDispatchServlet to invoke methods requiring OXM marshalling.</description> <constructor-arg ref="marshaller"/> </bean> <bean id="marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="contextPaths"> <list> <value>org.company.xml.persons</value> <value>org.company.xml.person_allextensions</value> <value>generated</value> </list> </property> </bean> <bean id="persons" class="com.easy95.springws.wsdl.wsdl11.MultiPrefixWSDL11Definition"> <property name="schemaCollection" ref="schemaCollection"/> <property name="portTypeName" value="persons"/> <property name="locationUri" value="/ws/personnelService/"/> <property name="targetNamespace" value="http://mycompany/coolservice/specs/definitions"/> </bean> <bean id="schemaCollection" class="org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection"> <property name="xsds"> <list> <value>/DataContract/Person-AllExtensions.xsd</value> <value>/DataContract/Person.xsd</value> </list> </property> <property name="inline" value="true"/> </bean> I have then the following files: public interface MarshallingPersonService { public final static String NAMESPACE = "http://mycompany/coolservice/specs"; public final static String CHANGE_PERSON = "ChangePerson"; public RespondPersonType changeEquipment(ChangePersonType request); } and @Endpoint public class PersonEndPoint implements MarshallingPersonService { @PayloadRoot(localPart=CHANGE_PERSON, namespace=NAMESPACE) public RespondPersonType changePerson(ChangePersonType request) { System.out.println("Received a request, is request null? " + (request == null ? "yes" : "no")); return null; } } I am pretty much new to WebServices, and not very comfortable with annotations. I am following a tutorial on setting up jaxb marshaller in springws. I would rather use xml mappings than annotations, although for now I am getting the error message.

    Read the article

  • How to modify IIS handler mapping permissions via Wix or a Custom Action

    - by Finch
    Hi, I'm using Wix to create an installer for a Silverlight application. When I install the application, the virtual directory that has been created has the execute permission checked for the *.dll handler mapping (IIS 7 > Web site > VDir > Handler Mappings > *.dll > Edit Feature Permissions > Execute). When I browse to the application it cannot download its satellite assemblies in ClientBin. If I uncheck the execute permission in IIS, the handler becomes disabled and the application now works. I don't want to have to do this manually. Does anybody know how to modify the handler mapping permissions in Wix or a Custom Action? Thanks

    Read the article

  • how to set jqgrid cell color at runtime

    - by anil
    Hi, I am populating a jqgrid from a database and one of its columns is a color column (red, blue, etc.). Can I set the cell color of this column based on the value coming from the database at run time? How should I set the formatter in this case? I tried like this but it does not work:

      var colorFormatter = function(cellvalue, options, rowObject) {
          var colorElementString = '';
          return colorElementString;
      };

      colModel: [
          { name: 'GroupName', index: 'GroupName', width: 200, align: 'left' },
          { name: 'Description', index: 'Description', width: 300, align: 'left' },
          { name: 'Color', index: 'Color', width: 60, align: 'left', formatter: colorFormatter }
      ],

    Read the article

  • Silverlight 4 Setting background color of textbox on focus

    - by Sean Riedel
    I am trying to set the background color of a Textbox to white on focus using a Style. My enabled Textbox has a Linear Gradient background by default: <Setter Property="Background"> <Setter.Value> <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1"> <GradientStop Color="#e8e8e8" Offset="0.0" /> <GradientStop Color="#f3f3f3" Offset="0.25" /> <GradientStop Color="#f4f4f4" Offset="0.75" /> <GradientStop Color="#f4f4f4" Offset="1.0" /> </LinearGradientBrush> </Setter.Value> </Setter> I have a focus visual state defined: <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"> <Storyboard> <DoubleAnimation Storyboard.TargetName="FocusVisualElement" Storyboard.TargetProperty="Opacity" To="1" Duration="0"/> </Storyboard> </VisualState> Here is the rest of the Control Template: <Border x:Name="Border" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="1" Opacity="1" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}"> <Grid> <Border x:Name="ReadOnlyVisualElement" Opacity="0" Background="#5EC9C9C9"/> <Border x:Name="MouseOverBorder" BorderThickness="1" BorderBrush="Transparent"> <ScrollViewer x:Name="ContentElement" Padding="{TemplateBinding Padding}" BorderThickness="0" IsTabStop="False"/> </Border> </Grid> </Border> <Border x:Name="DisabledVisualElement" Background="#A5D7D7D7" BorderBrush="#A5D7D7D7" BorderThickness="{TemplateBinding BorderThickness}" Opacity="0" IsHitTestVisible="False"/> <Border x:Name="FocusVisualElement" Background="#A5ffffff" BorderBrush="#FF72c1ec" BorderThickness="{TemplateBinding BorderThickness}" Margin="1" Opacity="0" IsHitTestVisible="False"/> On the last Border tag is where I am trying to set the background to white (#ffffff). Right now I have the opacity at A5 which does turn the background white, however it also starts to obscure the text in the box. Any higher opacity makes the text invisible so I am pretty sure that setting the background of that border is not the right way to do this. Can I set the Background color of the ContentElement somehow through a StoryBoard? Thanks.

    Read the article

  • UIImage color changing?

    - by senthilmuthu
    Hi, how can I change a UIImage's color programmatically? If I pass in a UIImage, its color must be changed. If I change the RGB color like the following through bitmap handling, it does not work; I have given black's RGB value, 0, 0, 0. Any help would be appreciated.

    Read the article

  • Qt: Background color on QStandardItem using StyleSheet

    - by Johan
    Hi, I have a class that inherits QStandardItem and I put the elements in a QTreeWidget. The class receives notifications from the outside and I want to change the background color of the item based on what happened. If I do not use stylesheets it works just fine, like this:

      void myClass::onExternalEvent() {
          setBackground(0, QColor(255, 0, 0));
      }

    However, as soon as I put a stylesheet on the QTreeWidget this has no effect; the stylesheet seems to override the setBackground call. So I tried:

      void myClass::onExternalEvent() {
          this->setStyleSheet("background-color: red");
      }

    but this is probably all wrong; it changed the color of some other element on my screen, not sure why. So does anyone have an idea how I can alter the background color like with setBackgroundColor but still be able to use a stylesheet on my QTreeWidget?

    Read the article

  • Java2d: JPanel set background color not working

    - by Algorist
    I have the code below:

      public VizCanvas() {
          this.setBackground(Color.black);
          this.setSize(400, 400);
      }

    It worked fine and displays the panel with a black background. But when I implement the paint method, which does nothing, the color changes to the default color, i.e. gray. I tried to set graphics.setColor() but it didn't help. Thank you.

    Read the article

  • Use jQuery to find div by background color

    - by maxsilver
    I'm trying to use jQuery to find the number of divs that are both visible, and have a background color of green. (Normally I'd just add a class to the div, style it green, and check for that class in jQuery but in this instance, I can't actually change the markup of the page itself in any way.) I currently have the visible div part working as:

      if( // if there are more than one visible div
          $('div.progressContainer:visible').length > 0 ){

    I'd like to throw some kind of "and background color is green" selector in there:

      // not legit javascript
      if( // if there are more than one visible div, and its color is green
          $('div.progressContainer:visible[background-color:green]').length > 0 ){

    Is it possible to do this?

    Read the article

  • Calculating the average color between two colors in PHP, using an index number as reference value

    - by Roel Krottje
    Hi all! In PHP, I am trying to calculate the average color (in hex) between two different hex colors. However, I also need to be able to supply an index number between 0.0 and 1.0. So, for example, I have $color1 = "#ffffff" and $color2 = "#0066CC". If I wrote a function to get the average color and supplied 0.0 as the index number, the function would need to return "#ffffff". If I supplied 1.0 as the index number, the function would need to return "#0066CC". However, if I supplied 0.2, the function would need to return a color between the two, but still closer to $color1 than to $color2. If I supplied index number 0.5, I would get the exact average of both colors. I have been trying to accomplish this for several days now but I can't seem to figure it out! Any help would therefore be greatly appreciated! Thanks!
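
    The blend itself is just per-channel linear interpolation: result = channel1 + (channel2 - channel1) * index, applied to R, G and B separately and then re-assembled as hex. Worked through for the example above with index 0.2 (values rounded to the nearest integer):

      R: 255 + (0   - 255) * 0.2 = 204.0 -> 0xCC
      G: 255 + (102 - 255) * 0.2 = 224.4 -> 0xE0
      B: 255 + (204 - 255) * 0.2 = 244.8 -> 0xF5

    giving approximately "#CCE0F5", i.e. a color still much closer to #ffffff than to #0066CC, as required.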

    Read the article
