Search Results

Search found 23004 results on 921 pages for 'internet mapping'.


  • With NHibernate, how can I add a child object when updating a parent object?

    - by BMZ
    I have a simple Parent/Child relationship between a Person object and an Address object. The Person object exists in the DB. After doing a Get on the Person, I add a new Address object to the Address sub-object list of the parent, do some other updates to the Person object, and finally do an Update on the Person object. In a SQL trace window I can see the update to the Person table and the insert of the Address record into the Address table. The issue is that, after the update is performed, the AddressId (primary key on the Address object) is still set to 0, which is what it defaults to when you first initialize the Address object. I have verified that when I do an Add, this value is set correctly. Is this a known issue when trying to add sub-objects as part of an NHibernate UPDATE? Sample code and mapping files are below. Thanks.

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <class name="BusinessEntities.Wellness.Person,BusinessEntities.Wellness" table="Person"
                 lazy="true" dynamic-insert="true" dynamic-update="false">
            <id name="Personid" column="PersonID" type="int">
              <generator class="native" />
            </id>
            <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" />
            <property type="int" not-null="true" name="Customerid" column="`CustomerID`" />
            <property type="AnsiString" not-null="true" length="9" name="Ssn" column="`SSN`" />
            <property type="AnsiString" not-null="true" length="30" name="FirstName" column="`FirstName`" />
            <property type="AnsiString" not-null="true" length="35" name="LastName" column="`LastName`" />
            <property type="AnsiString" length="1" name="MiddleInitial" column="`MiddleInitial`" />
            <property type="DateTime" name="DateOfBirth" column="`DateOfBirth`" />
            <bag name="PersonAddresses" inverse="true" lazy="true" cascade="all">
              <key column="PersonID" />
              <one-to-many class="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" />
            </bag>
          </class>
        </hibernate-mapping>

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <class name="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" table="PersonAddress"
                 lazy="true" dynamic-insert="true" dynamic-update="false">
            <id name="PersonAddressId" column="PersonAddressID" type="int">
              <generator class="native" />
            </id>
            <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" />
            <property type="AnsiString" not-null="true" length="1" name="AddressTypeid" column="`AddressTypeID`" />
            <property type="AnsiString" not-null="true" length="60" name="AddressLine1" column="`AddressLine1`" />
            <property type="AnsiString" length="60" name="AddressLine2" column="`AddressLine2`" />
            <property type="AnsiString" length="60" name="City" column="`City`" />
            <property type="AnsiString" length="2" name="UsStateId" column="`USStateID`" />
            <property type="AnsiString" length="5" name="UsPostalCodeId" column="`USPostalCodeID`" />
            <many-to-one name="Person" cascade="none" column="PersonID" />
          </class>
        </hibernate-mapping>

        Person newPerson = new Person();
        newPerson.PersonName = "John Doe";
        newPerson.SSN = "111111111";
        newPerson.CreatedBy = "RJC";
        newPerson.CreatedDate = DateTime.Today;
        personDao.AddPerson(newPerson);

        Person updatePerson = personDao.GetPerson(newPerson.PersonId);
        updatePerson.PersonAddresses = new List<PersonAddress>();

        PersonAddress addr = new PersonAddress();
        addr.AddressLine1 = "1 Main St";
        addr.City = "Boston";
        addr.State = "MA";
        addr.Zip = "12345";
        updatePerson.PersonAddresses.Add(addr);

        personDao.UpdatePerson(updatePerson);
        int addressID = updatePerson.PersonAddresses[0].AddressId;
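
    For reference, with inverse="true" on the bag, NHibernate persists the association from the child side, so a common cause of symptoms like this is a child whose Person reference was never set (and a freshly new-ed List replacing the mapped collection, which breaks cascade tracking). A rough sketch of wiring both sides, reusing the names from the code above; whether it applies to this poster's case is an assumption:

        // With an inverse collection, the child must point back to its parent,
        // otherwise NHibernate has no foreign-key value to insert.
        Person updatePerson = personDao.GetPerson(newPerson.PersonId);

        PersonAddress addr = new PersonAddress();
        addr.AddressLine1 = "1 Main St";
        addr.City = "Boston";
        addr.Person = updatePerson;              // set the child side of the association

        updatePerson.PersonAddresses.Add(addr);  // add to the existing mapped collection,
                                                 // don't replace it with a new List

        personDao.UpdatePerson(updatePerson);    // cascade="all" inserts the child; its
                                                 // generated id is populated on flush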

    Read the article

  • using onDelete with Doctrine 2

    - by tamir
    I can't get onDelete to work in Doctrine 2 (with YAML mapping). I tried this relation in my Product class:

        oneToOne:
          category:
            targetEntity: Category
            onDelete: CASCADE

    But that doesn't work. EDIT: I've set ON DELETE CASCADE manually in the database, imported the YAML mapping with doctrine:mapping:import, emptied the database, updated it from the schema with doctrine:schema:update, and got no ON DELETE on the foreign key. So it looks like even Doctrine doesn't know how to do it.
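
    For reference, in Doctrine 2's YAML mapping onDelete belongs to the join column rather than to the association itself, so the usual shape is a sketch like this (the column names here are assumptions):

        oneToOne:
          category:
            targetEntity: Category
            joinColumn:
              name: category_id
              referencedColumnName: id
              onDelete: CASCADE

    Because onDelete ends up in the generated DDL as an ON DELETE CASCADE clause on the foreign key, the schema has to be re-created or updated (doctrine:schema:update) after changing it.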

    Read the article

  • How do I generate Hibernate models for Java dotCMS plugins

    - by shuxer
    Hi, I am trying to create a plugin for dotCMS that stores data in a database. I defined a Hibernate mapping file and put it in my plugin folder's conf dir, but I have no idea how to make Hibernate generate model classes based on my mapping definition. I am using the hello world plugin's mapping file for MySQL. Any help or comment would be highly appreciated. Thanks in advance
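
    For what it's worth, Hibernate's own tooling can generate POJOs from .hbm.xml files via the Hibernate Tools hbm2java exporter. A sketch of the Ant usage, assuming hibernate-tools and its dependencies are on the classpath and the mapping file is referenced from hibernate.cfg.xml (paths are placeholders):

        <taskdef name="hibernatetool"
                 classname="org.hibernate.tool.ant.HibernateToolTask"
                 classpathref="toolpath"/>

        <hibernatetool destdir="src/generated">
          <configuration configurationfile="hibernate.cfg.xml"/>
          <hbm2java/>
        </hibernatetool>

    Whether the dotCMS plugin build exposes a hook for this is an assumption; the generated classes could also be produced once and checked into the plugin source.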

    Read the article

  • NHibernate: Dynamically swapping a single domain model between multiple physical data models

    - by Nigel
    Hi. In this article Ayende describes how to map a single domain model to multiple physical data models. Is it possible to extend this principle such that the mapping can be chosen dynamically? So, for example, imagine we had an entity that could be written to the same physical schema in three ways depending on its current status, and let's assume that regardless of status each entity has a unique identifier. One solution would be to represent the entity in its different states with three separate classes: one for each mapping. Then the entity could be loaded, and in order to change its state the entity could be mapped to a class representing one of its other states and then saved back to the schema, making use of a different mapping. I was wondering if it is at all possible to have the same entity represented by one class that holds a status flag (kind of like a discriminator), where any save to the schema would choose the appropriate mapping based on the value of the status flag. Hopefully that made sense! Many thanks.
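
    For reference, NHibernate's entity-name feature is one way to map a single class more than once; the save call then selects the mapping explicitly, so driving it from the status flag means translating the flag into an entity name yourself. A rough sketch with invented names, an approximation of the discriminator-like behavior asked about rather than exactly it:

        <!-- same class, two mappings distinguished by entity-name -->
        <class name="Domain.Order, Domain" entity-name="DraftOrder" table="DraftOrders">...</class>
        <class name="Domain.Order, Domain" entity-name="ArchivedOrder" table="ArchivedOrders">...</class>

        // pick the mapping at save time based on the status flag
        session.Save(order.IsDraft ? "DraftOrder" : "ArchivedOrder", order);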

    Read the article

  • How to find unmapped properties in an NHibernate-mapped class?

    - by haarrrgh
    I just had an NHibernate-related problem where I forgot to map one property of a class. A very simplified example:

        public class MyClass
        {
            public virtual int ID { get; set; }
            public virtual string SomeText { get; set; }
            public virtual int SomeNumber { get; set; }
        }

    ...and the mapping file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyAssembly" namespace="MyAssembly.MyNamespace">
          <class name="MyClass" table="SomeTable">
            <property name="ID" />
            <property name="SomeText" />
          </class>
        </hibernate-mapping>

    In this simple example you can see the problem at once: there is a property named "SomeNumber" in the class, but not in the mapping file. So NHibernate will not map it and it will always be zero. The real class had a lot more properties, so the problem was not as easy to see, and it took me quite some time to figure out why SomeNumber always returned zero even though I was 100% sure that the value in the database was != zero. So, here is my question: is there some simple way to find this out via NHibernate? Like a compiler warning when a class is mapped but some of its properties are not, or some query I can run that shows me unmapped properties in mapped classes...you get the idea. (Plus, it would be nice if I could exclude some legacy columns that I really don't want mapped.)
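
    There's no built-in compiler warning for this, but NHibernate's metadata API makes a startup check or unit test straightforward to sketch (GetClassMetadata and PropertyNames are standard NHibernate APIs; the ignore list covers the legacy-column wish):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;
        using NHibernate;
        using NHibernate.Metadata;

        public static class MappingChecker
        {
            // Prints properties that exist on the class but are absent from the mapping.
            public static void ReportUnmapped(ISessionFactory factory, Type type,
                                              params string[] ignore)
            {
                IClassMetadata meta = factory.GetClassMetadata(type);
                var mapped = new HashSet<string>(meta.PropertyNames)
                    { meta.IdentifierPropertyName };

                foreach (PropertyInfo p in type.GetProperties(
                             BindingFlags.Public | BindingFlags.Instance))
                {
                    if (!mapped.Contains(p.Name) && !ignore.Contains(p.Name))
                        Console.WriteLine("Unmapped: {0}.{1}", type.Name, p.Name);
                }
            }
        }

    Called as MappingChecker.ReportUnmapped(sessionFactory, typeof(MyClass), "SomeLegacyColumn"), this would have flagged SomeNumber at startup instead of at debugging time.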

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to make and others aren't. That's why the O/R mapper frameworks on the market today differ in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application, as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem found -> no solution.

    One warning up front: hunting for performance always includes making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is then inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Check what the code does along the way and verify whether it's correct. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code: they reveal where the hotspots are located, the areas where a solution can be found. Just as importantly, they also reveal which areas can be left alone. Remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers.

    For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql / LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor provides one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you use isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here the O/R mapper profilers have an advantage, as they measure the time a query took from the application's perspective, so they include the time it took to transport data across the network. This matters because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler usually doesn't take this into account.

    Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything obviously wrong, which parts actually take part in the task, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important.
    The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are the ones to look at. We first examine the .NET profiler results, to see whether the time is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just one query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, and whether it has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where along the path the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My earlier example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken and the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected that this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on.

    In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you actively correct the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they merely moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
    Performance tuning is dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to step away from the strict-OO path and execute queries directly against the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to live with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an (incomplete) list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading via a prefetch path to fetch the customers together with the orders, as in the sketch below. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of nearly identical queries executed in a burst, you have this problem.
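
    As a sketch of that eager-loading fix, using an LLBLGen Pro adapter-style API from memory (OrderEntity / PrefetchPathCustomer are names from a hypothetical generated model, so treat the exact calls as assumptions):

        // Fetch all orders plus their customers in two queries instead of 1+N.
        var orders = new EntityCollection<OrderEntity>();
        var path = new PrefetchPath2((int)EntityType.OrderEntity);
        path.Add(OrderEntity.PrefetchPathCustomer);

        using (var adapter = new DataAccessAdapter())
        {
            // null filter bucket: no WHERE clause, fetch everything
            adapter.FetchEntityCollection(orders, null, path);
        }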

    Prefetch paths using many path nodes, or sorting, or limiting (an eager-loading problem): prefetch paths can help with performance, but as one query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to serious performance problems if the hierarchy has many types. If you have this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to make.

    Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to working through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will then do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data-reader, so it won't fetch all rows even if the query suggests it does.

    Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

    Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere in a Linq query, which makes the query run partially in-memory; the sketch below shows the difference. A query like 'from c in metaData.Customers.ToList() where c.Country == "Norway" select c' will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is then defined on an IEnumerable<T> and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.
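
    A minimal sketch of the .ToList() difference, assuming a hypothetical metaData source object:

        // Runs in-memory: ToList() fetches ALL customers first, then filters
        // on the client, because the query is built on IEnumerable<T>.
        var inMemory = from c in metaData.Customers.ToList()
                       where c.Country == "Norway"
                       select c;

        // Runs on the database: the query stays IQueryable<T>, so the filter
        // is translated into the SQL WHERE clause.
        var inSql = from c in metaData.Customers
                    where c.Country == "Norway"
                    select c;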

    Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity has actually been removed from the DB and thus can be removed from the cache.

    Fetching all entities to update with an expression into memory first: similar to the previous point, it's more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to make, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Enjoy Playing Dozens of Classic Atari, Adventure, and Other Types of Games Directly in Your Browser

    - by Akemi Iwaya
    Would you love to play classic Atari games, journey once again with Bilbo Baggins in The Hobbit v1.0, or even try out WordStar 2.26? Then we have the perfect way to indulge in hours of browser-based fun to share with you. The Internet Archive has worked hard to put together a JavaScript port of the MESS computer software emulator and create an awesome online Historical Software Collection of classic games and software from yesteryear! When you visit the homepage, you can scroll down through it for a 'guided tour' of the games and software currently available in the initial collection, and each game or piece of software has its own individual homepage. Keep in mind that none of the 'Download item' links we checked were working for us, even though they are shown. Browse on over to the Internet Archive's Historical Software Collection homepage to start having fun with all the classic games and programs. Historical Software Collection Homepage If you would like to visit the homepage for The Hobbit v1.0 directly, then use the link below. Play The Hobbit v1.0 [via The Verge]

    Read the article

  • Ask the Readers: Which Web Browser Do You Use?

    - by Mysticgeek
    Yesterday we looked at the Browser Ballot Screen, introduced in March for Windows users in Europe, which offers 12 different browsers as alternatives to IE. This got us thinking about this week's question: what browser do you use for your daily web navigation? While the ballot screen offers the most well-known browsers on the market, there are some obscure choices as well, and it got us wondering what web browser(s) you use at home, in the office, or even on your mobile devices. Some people might have a favorite browser they use at home but are required to use IE at work due to proprietary applications the company uses. Also, if you use an operating system other than Windows, you might favor Safari, Firefox, Konqueror, etc. What web browser do you use? Leave a comment and join in the discussion!

    Read the article

  • Failed to download repository information Check your Internet connection

    - by Luca Brazza
    I need to check if I have updates for Ubuntu. I think it is 11.05. As you can see, this is what it says:

    Failed to download repository information. Check your Internet connection. Details:

        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/source/Sources  404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.
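
    The W: lines point at a cdrom: entry still listed as a package source (plus a PPA that now returns 404). A sketch of the usual cleanup, assuming the default /etc/apt/sources.list layout:

        # comment out the installation CD-ROM as a package source
        sudo sed -i '/^deb cdrom:/s/^/# /' /etc/apt/sources.list

        # optionally drop the 404ing PPA, then refresh the package lists
        sudo add-apt-repository --remove ppa:ferramroberto/java
        sudo apt-get update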

    Read the article

  • Ask the Readers: How Fast is Your Internet Connection?

    - by Mysticgeek
    The federal government recently announced a broadband initiative that calls for 260 million homes to have 100Mbps Internet connections by the year 2020. This got us wondering: how fast is your current Internet connection? When it comes to the speed of our Internet connection, we all want the maximum possible. The FCC recently announced their National Broadband Plan, an initiative to improve the Internet infrastructure in the United States and provide higher speeds to everyone. You've also undoubtedly heard the news about Google getting into the mix with their program to bring ultra high-speed fiber broadband to 50,000 users in select cities. While we wait for those programs to come to fruition, we thought it would be cool to check out what kinds of speeds you're getting now. Test your Internet connection speed: there are several sites out there you can use to test your Internet speed, but probably the best is Speedtest.net. It's easy to use, and allows you to test download and upload speeds to and from various locations in the US and throughout the world. If you already know the speeds you're getting, leave a comment and let us know. If you use Speedtest.net, just keep in mind that our comment system won't allow you to copy their result links, but you can simply tell us what you get in the results. We're especially interested in the results of those of you who have Verizon FiOS or Comcast's "Ultra" service. Leave a comment and join in the discussion!

    Read the article

  • My Windows XP SP3 diagnostic: "Windows could not detect any wired or wireless network cards installed on your machine"

    - by Yosef
    Problem: I can't connect to the Internet with my new installation of Windows XP SP3. Details: I had Ubuntu on this PC and wired Internet worked. I formatted the whole disk and installed Windows XP SP3. The connection is configured automatically by my router, and the other computers have Internet access. When I run the Internet Explorer diagnostic I get: "Windows could not detect any wired or wireless network cards installed on your machine." In Device Manager I have only a 1394 adapter; I don't see any network adapters. Edit: using an Ubuntu live CD I found that the hardware is an 82566DC gigabit network connection. Thanks

    Read the article

  • No disk space, can't use internet

    - by James
    After trying to install drivers using sudo apt-get update && sudo apt-get upgrade, I'm faced with a message saying "no space left on device". I ran Disk Usage Analyzer as root and there were three items: the main volume, my home folder, and my 116 GB hard drive (which is practically empty), yet both other folders are full, which is stopping me from installing drivers because of space. How do I get Ubuntu to use the space on the hard drive? It's causing problems because I can't download drivers without enough space, and this happens every time I try.

        sudo fdisk -l

        Disk /dev/sda: 120.0GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003eeed

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048  231315455  115656704  83  Linux
        /dev/sda2    231317502  234440703    1561601   5  Extended
        /dev/sda5    231317504  234440703    1561600  82  Linux swap / Solaris

    Output of df -h:

        df: '/media/ubuntu/FB18-ED76': No such file or directory
        Filesystem  Size  Used Avail Use% Mounted on
        /cow        751M  751M     0 100% /
        udev        740M   12K  740M   1% /dev
        tmpfs       151M  792K  150M   1% /run
        /dev/sr0    794M  794M     0 100% /cdrom
        /dev/loop0  758M  758M     0 100% /rofs
        none        4.0K     0  4.0K   0% /sys/fs/cgroup
        tmpfs       751M  1.4M  749M   1% /tmp
        none        5.0M  4.0K  5.0M   1% /run/lock
        none        751M  276K  751M   1% /run/shm
        none        100M   40K  100M   1% /run/user

    Read the article

  • Getting Xbox Live via a wired network with my laptop that has internet access wirelessly

    - by Alex Franco
    I'm running the latest version (as of yesterday, anyway) of Ubuntu Desktop 64-bit, installed on my laptop, if it makes a difference. I had Windows 7 preinstalled when I bought it, and it worked fine with the wireless from my house, bridging the connection over a LAN cable to my Xbox for Live. Now with Ubuntu I tried the same setup, but I'm unfamiliar with Ubuntu, so I didn't get far. The best I've got so far is wireless internet on my laptop and a wired connection to the Xbox that continually connects and disconnects. Here are my network settings; if fields are not included, it's because they're empty on mine, or they're my MAC address or network password.

    Wireless Network 1 settings: Connect Automatically: Checked. Available to all Users: Checked. Wireless: SSID: Franco's. Mode: Infrastructure. MTU: Automatic. IPv4 Settings: Method: Automatic (DHCP). IPv6 Settings: Method: Automatic.

    Wired Network 1: Connect Automatically: Checked. Available to all Users: Checked. Wired: MTU: Automatic. IPv4 Settings: Method: Automatic (DHCP). IPv6 Settings: Method: Automatic.

    Any help would be greatly appreciated. EDIT, 6:26pm: It seems to be staying connected now. Running the network test on my Xbox picks up the network but cannot detect any PC. Restarting the Xbox, however, leaves my computer unable to connect, bringing up the "Wired Network disconnected" blip every minute or so again. Before I restarted the Xbox it said "Connected 100 MB/s"; now it only says "connecting". I did have my computer and Xbox on in this disconnected-blip cycle for a long period of time, so it may have finally connected, just without the ability to detect my laptop. I left for two hours or so in the middle of typing up the original question and finished posting when I got back, then tried to mess with it a bit again, in case you're wondering why I didn't include this before... I've said too much. Forgive my long-winded fingers :p

    Read the article

  • Ubuntu 11.04 does not detect my tata photon +

    - by nikhil
    I have been trying to connect to the Internet via my Photon+ since yesterday, have tried various suggestions from the Internet, and have narrowed the problem down to the device not being detected. I have tried the following: 1) After connecting the device, I tried to create a mobile broadband connection via System > Preferences > Network Connections and selected Tata Photon Plus, but there was no connection. 2) I ran sudo apt-get install usb-modeswitch-data and sudo apt-get install usb-modeswitch, but the prompt said that they are at their newest version. 3) I tried to edit the file sudo gedit /etc/usb_modeswitch.d/12d1:1446, but this file does not exist. 4) My lsusb output is:

        nikhil@nikhil:~$ lsusb
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 003: ID 22f4:0021
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 04f2:b159 Chicony Electronics Co., Ltd
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Any ideas?

    Read the article

  • Browsing not working in Windows 8

    - by Jonathan Perry
    I'm using Windows 8 Professional, installed over Windows 7 using the "Save my preferences and apps" installation option. Windows works great, apps are downloading, and I can listen to online radio stations using the TuneIn Radio app, meaning the Internet connection is alive. However, when I open a browser (either Chrome or IE10) and try to browse the internet, I get an "Unable to resolve DNS" error message. Prior to installing, Internet browsing worked flawlessly, I must say. I'm using ESET NOD32 Antivirus, so I suspect that it might be interfering with the web connection now, but I'm not so sure. Internet options show that the PC is set to resolve DNS automatically. I don't know what to do; my other Win7 PCs on my home wifi network connect to the Internet without any issues. If anyone can help me resolve this I'll be grateful :) Thanks
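
    A few stock Windows commands can split the problem between name resolution and the connection itself (8.8.8.8 is simply Google's public resolver, used here as a known-good test target):

        :: does resolution work through the currently configured resolver?
        nslookup www.google.com

        :: does it work when querying a known-good resolver directly?
        nslookup www.google.com 8.8.8.8

        :: flush the local DNS cache and reset the Winsock catalog (reboot after)
        ipconfig /flushdns
        netsh winsock reset

    If the second nslookup succeeds while the first fails, pointing the adapter at a public DNS server, or temporarily disabling NOD32's web-protection module, would be reasonable next tests.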

    Read the article

  • Internet Explorer 9 is coming Monday to a web near you

    - by brian_ritchie
    Internet Explorer 9 is finally here...well, almost. Microsoft is releasing their new browser on March 14, 2011. IE9 has a number of improvements, including: Faster, faster, faster. Did I mention it is faster? With the new browsers coming out from Mozilla, Google, and Microsoft, there has been a flood of speed-test coverage. Chrome has long held the JavaScript speed crown, but according to Steven J. Vaughan-Nichols over at ZDNet, "for the moment at least IE9 is actually the fastest browser I've tested to date." He came to this revelation after figuring out that the 32-bit version of IE9 has the new Chakra JIT (the 64-bit version doesn't). It also has a DirectX-based rendering engine, so it can do cool tricks once reserved for desktop applications. Windows 7 desktop integration: read my post for more details. Unfortunately, they didn't integrate my ideas...at least not yet :) Hot new UI: OK, they "borrowed" some ideas from Chrome...but that is the best form of flattery. Standards compliance: a real focus on HTML5 and CSS3. Definite goodness for developers. So, go get yourself some IE9 on Monday and enjoy!

    Read the article

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing sucks at home (slow browsing and slow loading times). I changed DNS servers to 8.8.0.0, which still doesn't help. Funnily enough, download speed is extremely high on this network (torrents, for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in the router settings, or what can I try? By the way, I use a wired connection to the router. EDIT: There are no problems when using Windows. EDIT: ifconfig:

        eth0      Link encap:Ethernet  HWaddr f2:4d:a0:c0:3f:4c
                  inet addr:192.168.11.8  Bcast:192.168.11.255  Mask:255.255.255.0
                  inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:206798 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:76680734 (76.6 MB)  TX bytes:21738160 (21.7 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:160 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:11094 (11.0 KB)  TX bytes:11094 (11.0 KB)

        $ ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        $ ping -c 2 google.com
        PING google.com (213.159.32.147) 56(84) bytes of data.
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms
        --- google.com ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms

    Read the article

  • My internet connection slows or dies unexpectedly

    - by genesis
    I installed Ubuntu 10.04 once again, and I'm having some problems which I had before but have no idea how I solved them. On Windows everything works fine and I had no problems with this. My problem is that sometimes, when browsing the internet, webpages start to load really slowly, and sometimes nothing loads at all (Error 118 (net::ERR_CONNECTION_TIMED_OUT): The operation timed out.), then it starts to work again after a few minutes. My IPv4 settings are automatic (DHCP), and IPv6 settings are Ignored/Disabled. I think my previous problems had something to do with IPv6, but I'm not sure. Is there a fix for this? iwconfig:

        lo        no wireless extensions.

        eth0      no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"Fsite1"
                  Mode:Managed  Frequency:2.442 GHz  Access Point: C8:3A:35:40:43:68
                  Bit Rate=0 kb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:on
                  Link Quality=43/70  Signal level=-67 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0
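
    Since IPv6 is the suspect, a quick way to rule it out on a 10.04-era system is to disable it via sysctl and watch whether the stalls stop (a diagnostic sketch rather than a confirmed fix; the settings revert at reboot unless added to /etc/sysctl.conf):

        # temporarily disable IPv6 on all interfaces
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

        # verify: should print 1
        cat /proc/sys/net/ipv6/conf/all/disable_ipv6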

    Read the article

  • PDF export printing in Internet Explorer [closed]

    - by user619804
    We are using code like the following to export a PDF document from a Jasper application:

        protected static byte[] exportReportToPdf(JasperPrint jasperPrint) throws JRException {
            JRPdfExporter exporter = new JRPdfExporter();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
            exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos);
            exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT,
                "this.print({bUI: true, bSilent: false, bShrinkToFit: true});");
            exporter.exportReport();
            return baos.toByteArray();
        }

    The setParameter line with JRPdfExporterParameter.PDF_JAVASCRIPT adds JavaScript to send the PDF document directly to the printer. The expected behavior is that a print dialog comes up with a preview of the PDF document. This works fine most of the time, except that about one out of every 5-6 times in Internet Explorer 8 and Firefox, the print preview dialog with the PDF document does not appear, or it appears with a blank document in the preview window. I've tried a number of different JavaScripts (different params to this.print() via exporter.setParameter). I've also tried setting different response headers, such as:

        response.setContentType("application/pdf");
        response.setHeader("Content-disposition", "inline; filename=\"" + reportName + "\"");
        response.setContentLength(baos.size());

    These did not seem to help. This seems to be an IE and FF issue. Has anyone ever dealt with this problem? I need to get it to work across all browsers 100% of the time. Perhaps a different approach to accomplish the goal of sending the PDF export directly to the printer, or a third-party library that will work across browsers?

    Read the article

  • Lots of Internet browsing issues, all browsers

    - by dario_ramos
    Before the upgrade, everything was working fine. Now, however, I can connect to the Internet but a lot of stuff fails, and the weirdest thing is that it happens with Firefox, Chromium and Opera. Some of the things that fail: I can't log in to Stack Overflow; after entering user/pass it loads for a long time on Firefox and throws Error 408 (browser request timed out) on Chromium and Opera. I can't log in to Hotmail, with similar symptoms. I can log in to Facebook, but when I try to write a comment or just post something on my wall, it stays loading for a long time and then fails. The first two issues seem to be related to secure pages, and the third is another issue altogether, I believe; yet they all happen with all browsers, which is really weird. Speaking of weird: I connect using a Huawei SmartAX MT 810 USB modem, which cost me blood and tears to get working under Ubuntu. I ordered an ethernet modem/router from my ISP and I'm still waiting, but this issue intrigues me anyway. Has anyone experienced this kind of problem? I Googled around but couldn't find a similar case.
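
    Symptoms like this (plain pages load, but logins and form posts hang in every browser) are classic MTU / path-MTU trouble on PPP-over-USB links. A hedged test, assuming the modem shows up as ppp0:

        # check the current MTU of the PPP interface
        ip link show ppp0

        # try a conservative lower MTU
        sudo ip link set dev ppp0 mtu 1400

        # probe the largest unfragmented payload (1372 + 28 bytes of headers = 1400)
        ping -M do -s 1372 -c 3 www.google.com

    If the smaller MTU makes the logins work, the value can be made permanent in the connection's settings.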

    Read the article

  • ISP Couldn't Verify My HFC MAC Id

    - by Ann Rahn
    An Internet Service Provider whom I used in the past claims to provide me with Internet service. However, when I gave his technician the hybrid fiber-coax (HFC) MAC ID on my modem, he could not find it on his list. Also, the Internet Service Provider doesn't even have my current mailing address; all he has is my phone number, and he is demanding payment. On the other hand, I use a cable service which provides TV. At this point, the Internet Service Provider is threatening to disconnect my Internet service. My question: how can I verify whether I am getting the service from him or through the cable company?

    Read the article

  • Package Manager cannot access repositories but internet is working

    - by kazman
    I am currently at a conference in another country and my package manager cannot access the repositories. My Internet connection is working fine and I can ping the repositories or open them in a browser, but the package manager fails to reach them. If I run sudo apt-get update it throws "Something wicked happened resolving 'wwwproxy:3128' (-5 - No address associated with hostname)" (or Ign lines). This proxy corresponds to the proxy at my office back home, but I have disabled the proxy in the package manager. Scanning for the best repository doesn't work either; it doesn't manage to connect to any. I have searched for this online and have checked things about my apt.conf file. My apt.conf contains:

        Acquire::http::proxy "http://wwwproxy:3128/";
        Acquire::https::proxy "https://wwwproxy:3128/";
        Acquire::ftp::proxy "ftp://wwwproxy:3128/";
        Acquire::socks::proxy "socks://wwwproxy:3128/";

    If I remove apt.conf (or replace it with a blank file), it makes no difference, which puzzles me, since I am connecting directly and have set that in the package manager's network settings. I have also tried some things with resolv.conf (changing the name servers to the primary and secondary DNS), to no avail. I am running 12.04. (I wrote this very quickly, listing everything I have tried, to shorten the troubleshooting process; I have very limited time between lectures and need this sorted ASAP. My apologies.)
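
    Since the error still names wwwproxy even with apt.conf emptied, the proxy is probably configured in another of the usual places; a sketch of where to look:

        # proxy variables in the current environment and system-wide config
        env | grep -i proxy
        grep -i proxy /etc/environment

        # other apt config fragments that may still set a proxy
        grep -ri proxy /etc/apt/apt.conf.d/

        # one-off test with any proxy environment variables stripped
        sudo env -u http_proxy -u https_proxy -u ftp_proxy apt-get update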

    Read the article

  • Avoid Internet Explorer Warning when embedding Youtube on HTTPS site?

    - by pellepim
    On an HTTPS site, embedding YouTube clips works great in all browsers except Internet Explorer, where I get this famous little warning message: "Do you want to view only the webpage content that was delivered securely? This page contains content that will not be delivered using a secure HTTPS..." etc. I've tried to solve this in several ways. The most promising was using the ProxyPass functionality in Apache to map to YouTube, like this:

        ProxyPass /youtube/ http://www.youtube.com
        ProxyPassReverse /youtube/ http://www.youtube.com

    This gets rid of the annoying warning; however, the YouTube SWF fails to start streaming. The SWF I manage to load into the browser simply states: "An error occurred, please try again later". Potential solutions are perhaps: download the YouTube FLVs and serve them from our own domain (gah), or use a custom FLV player and stream only the FLVs from YouTube over an HTTPS proxy?
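
    For what it's worth, YouTube also serves its player over HTTPS via the /embed/ iframe URLs, which avoids the mixed-content warning without any proxying; a sketch (VIDEO_ID is a placeholder):

        <!-- player loaded over HTTPS, so IE sees no insecure content -->
        <iframe width="560" height="315"
                src="https://www.youtube.com/embed/VIDEO_ID"
                frameborder="0" allowfullscreen></iframe>

    Whether this was available at the time of the question depends on the era; the ProxyPass approach likely fails because the SWF, once loaded, requests its video streams from other YouTube hosts, which the single mapping of www.youtube.com doesn't cover.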

    Read the article

  • Free internet radio station server software with remote broadcasting?

    - by Zachary Brown
    I am in the process of creating an Internet radio station, but the two DJs I have for it are not able to be in one place. I need them to be able to log in to a web-based broadcasting session that still has full functionality: they need to be able to broadcast their live shows with talk and music, with the music stored on the server. I have checked out the BroadWave streaming server from NCH, but it does not have the ability to log in as a DJ from a remote computer. I don't have any money for this, so I need it to be free; if that is not possible, I need it to be cheap!

    Read the article

  • When/why does Internet Explorer block installation of a (signed) ActiveX control?

    - by Geoff
    When the user visits a page that contains a signed ActiveX control that has never been seen before, I'd expect IE to ask the user for permission to install the control. But sometimes IE puts up a security warning instead. For example, consider this site, which has a test control: http://www.pcpitstop.com/testax.asp I'd expect to get this message, and sometimes I do: "The website wants to run the following add-on: 'XXX' from 'YYY'. If you trust the website and the add-on and want to allow it to run, click here..." But under IE8 on XP, I usually get this instead: "To help protect your security, Internet Explorer has restricted this site from showing certain content. Click here for options..." What's going on? Any ideas? Thanks!

    Read the article
