Search Results

Search found 2219 results on 89 pages for 'constant learner'.


  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files I came across hexpat. There is an example in the Haskell Wiki that claims to "-- Process document before handling error, so we get lazy processing". For testing purposes I redirected the output to /dev/null and threw a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Now I have removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText
                                  :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml
            hClose hFile
            return ()

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/

    Read the article

  • Python to C/C++ const char question

    - by tsukemonoki
    I am extending Python with some C++ code. One of the functions I'm using has the following signature:

        int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                        char *format, char **kwlist, ...);

    (link: http://docs.python.org/release/1.5.2p2/ext/parseTupleAndKeywords.html) The parameter of interest is kwlist. In the link above, examples on how to use this function are given. In the examples, kwlist looks like:

        static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    When I compile this using g++, I get the warning:

        warning: deprecated conversion from string constant to 'char*'

    So, I can change the static char* to a static const char*. Unfortunately, I can't change the Python code. So with this change, I get a different compilation error (can't convert char** to const char**). Based on what I've read here, I can turn on compiler flags to ignore the warning, or I can cast each of the constant strings in the definition of kwlist to char *. Currently, I'm doing the latter. What are other solutions? Sorry if this question has been asked before. I'm new.

    Read the article

  • .NET memory leak?

    - by SA
    I have an MDI which has a child form. The child form has a DataGridView in it. I load a huge amount of data into the DataGridView. When I close the child form, the disposing method is called, in which I dispose of the DataGridView:

        this.dataGrid.Dispose();
        this.dataGrid = null;

    When I close the form, the memory doesn't go down. I use the .NET Memory Profiler to track the memory usage. I see that the memory usage goes high when I initially load the data grid (as expected) and then becomes constant when the loading is complete. When I close the form it still remains constant. However, when I take a snapshot of the memory using the memory profiler, it goes down to what it was before loading the file. Taking a memory snapshot causes it to forcefully run the garbage collector. What is going on? Is there a memory leak? Or do I need to run the garbage collector forcefully? More information: when I am closing the form I no longer need the information. That is why I am not holding a reference to the data.
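
    A quick way to tell lazy collection from a genuine leak is to force a full collection after the form closes and see whether the managed heap shrinks. A minimal diagnostic sketch (not from the post; the class and method names are illustrative):

        using System;

        static class GcProbe
        {
            // Call after the child form closes: if the reported number drops
            // sharply, the grid's data was collectable all along and there is
            // no leak - the GC simply had no pressure to run yet.
            public static void ReportHeap()
            {
                GC.Collect();                  // full blocking collection
                GC.WaitForPendingFinalizers(); // let finalizers release handles
                GC.Collect();                  // sweep objects queued by finalizers
                long bytes = GC.GetTotalMemory(forceFullCollection: true);
                Console.WriteLine($"Managed heap: {bytes / (1024 * 1024)} MB");
            }
        }

    If the heap only shrinks when the probe runs, the behavior matches what the profiler snapshot shows: collection is merely deferred, not leaking.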

    Read the article

  • How to map string keys to unique integer IDs?

    - by Marek
    I have some data that comes regularly as a dump from a data source, with a string natural key that is long (up to 60 characters) and not relevant to the end user. I am using this key in a URL. This makes URLs too long and user-unfriendly. I would like to transform the string keys into integers with the following requirements: the source dataset will change over time, and the ID should be:

    - a non-negative integer
    - unique and constant even if the set of input keys changes
    - preferably reversible back to the key (not a strong requirement)

    The database is rebuilt from scratch every time, so I cannot remember the already assigned IDs, match the new data set to existing IDs, and generate sequential IDs for the added keys. There are currently around 30000 distinct keys and the set is constantly growing. How to implement a function that will map string keys to integer IDs? What I have thought about:

    1. Built-in string.GetHashCode: ID(key) = Math.Abs(key.GetHashCode()) - not guaranteed to be unique (and not reversible)
    1.1 "Re-hashing" the built-in GetHashCode until a unique ID is generated, to prevent collisions - existing IDs may change if something colliding is added to the beginning of the input data set
    2. A perfect hashing function - I am not sure if this can generate constant IDs if the set of inputs changes (and it is not reversible)
    3. Translating to base 36/64/?? - does not shorten the long keys enough

    What are the other options?
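
    One approach that meets the "constant across rebuilds" requirement (my suggestion, not from the post) is to derive the ID from a cryptographic hash of the key: the value depends only on the key itself, so it never changes when the data set grows. It is not reversible, and collisions, while astronomically unlikely with 63 bits and ~30000 keys, should still be checked once per rebuild. A sketch:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class StableId
        {
            // Non-negative 63-bit ID from the first 8 bytes of the key's
            // SHA-256 digest. Unlike string.GetHashCode, the result is
            // stable across processes, runs, and .NET versions.
            public static long FromKey(string key)
            {
                using var sha = SHA256.Create();
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(key));
                long id = BitConverter.ToInt64(hash, 0);
                return id & long.MaxValue;   // clear the sign bit
            }
        }

    At 30000 keys the chance of any two colliding in 63 bits is on the order of 10^-11, so a duplicate check during the rebuild is cheap insurance rather than a real risk.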

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this:

        var myArray = new Array();
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";
        ...

    In other words, array indexes do not start with zero and there will be "numeric gaps" between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from the example above, would items below it be relocated? EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory, not the indexes. More information from the Wikipedia article: "Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can 'delete' an element from an array in constant time by somehow marking its slot as 'vacant', this causes fragmentation that impedes the performance of iteration."

    Read the article

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer IDs, I wish to link all possible sets of up to k IDs to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values. Guarantees:

    - n < 100 and k < 10 (again: set sizes will range in [1, k]).
    - The order of IDs doesn't matter: {1, 5} == {5, 1}.
    - All combinations are possible, but some may be excluded.
    - All sets and values are constant and made only once. No deletes or inserts, no value updates.
    - Once generated, the only operations taking place will be look-ups.
    - Look-ups will be frequent and one-directional (given set, look up value).
    - There is no need to sort (or otherwise organize) the values.

    Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one ID, add one ID, swap one ID, etc.) are easy to reach, as well as "all sets that include at least this set". Any ideas?
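
    One simple encoding (my sketch, not from the post): sort the IDs and join them into a canonical string, then use that string as a dictionary key. Look-ups cost one sort of at most k elements plus a single hashed probe:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class SetTable
        {
            // Canonical key: sorted ids joined with '-', so {1,5} and {5,1}
            // map to the same entry. With n < 100 and k < 10 the key stays
            // under 30 characters, so hashing it is effectively O(1).
            public static string Key(IEnumerable<int> ids) =>
                string.Join("-", ids.OrderBy(i => i));
        }

        class Demo
        {
            static void Main()
            {
                var table = new Dictionary<string, double>();
                table[SetTable.Key(new[] { 1, 5 })] = 42.0;
                Console.WriteLine(table[SetTable.Key(new[] { 5, 1 })]); // 42
            }
        }

    "Neighboring" sets are also easy under this scheme: dropping or adding one ID just means rebuilding the key with one element removed or inserted.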

    Read the article

  • Singletons and constants

    - by devoured elysium
    I am making a program which makes use of a couple of constants. At first, each time I needed to use a constant, I'd define it as

        // C#
        private static readonly int MyConstant = xxx;
        // Java
        private static final int MyConstant = xxx;

    in the class where I'd need it. After some time, I started to realise that some constants would be needed in more than one class. At this point, I had three choices:

    1. Define them in each of the different classes that needed them. This leads to repetition: if for some reason I later need to change one of them, I'd have to check all the classes to replace it everywhere.
    2. Define a static class/singleton with all the constants as public.
    3. If I needed a constant X in ClassA, ClassB and ClassC, I could just define it in ClassA as public, and then have ClassB and ClassC refer to it. This solution doesn't seem that good to me as it introduces even more dependencies than the classes already have between them.

    I ended up implementing my code with the second option. Is that the best alternative? I feel I am probably missing some other, better alternative. What worries me about using the singleton here is that it is nowhere clear to a user of the class that this class is using the singleton. Maybe I could create a ConstantsClass that held all the constants needed and then pass it in the constructor to the classes that'd need it? Thanks
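
    The ConstantsClass idea from the question can be sketched like this (names are illustrative, not from the post) - the dependency becomes visible in the constructor instead of being hidden behind a singleton:

        using System;

        public sealed class AppConstants
        {
            public int MaxRetries { get; } = 3;
            public int TimeoutSeconds { get; } = 30;
        }

        public class Worker
        {
            private readonly AppConstants _constants;

            // The dependency on the shared constants is now explicit:
            // anyone constructing a Worker can see it needs AppConstants.
            public Worker(AppConstants constants) =>
                _constants = constants ?? throw new ArgumentNullException(nameof(constants));

            public void Run() =>
                Console.WriteLine($"Retrying up to {_constants.MaxRetries} times");
        }

    This trades a little constructor plumbing for testability, and it makes the hidden dependency explicit to every caller.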

    Read the article

  • const object in c++

    - by Codenotguru
    I have a question on constant objects. In the following program:

        class const_check {
            int a;
        public:
            const_check(int i);
            void print() const;
            void print2();
        };

        const_check::const_check(int i) : a(i) {}

        void const_check::print() const {
            int a = 19;
            cout << "The value in a is:" << a;
        }

        void const_check::print2() {
            int a = 10;
            cout << "The value in a is:" << a;
        }

        int main() {
            const_check b(5);
            const const_check c(6);
            b.print2();
            c.print();
        }

    void print() is a constant member function of the class const_check, so according to the definition of const, any attempt to change int a should result in an error - but the program works fine for me. I think I am having some confusion here; can anybody tell me why the compiler is not flagging this as an error?

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

    The Generic Collections: System.Collections.Generic namespace

    The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons, because depending on how you are using the collections it's completely possible that synchronization may not be required, or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

    The Associative Collection Classes

    Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order.

    The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals(), or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.

    The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
    The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

    The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than in a SortedDictionary. Once again, I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

    The Non-Associative Containers

    The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored, or some other means (such as index), to manipulate the collection.

    The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly, because you must shift all the items up or down as you delete or insert respectively. Adding and removing at the end of a List<T>, however, is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed.

    The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense.

    The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
    In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, as with Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction.

    The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>.

    Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out (FIFO) container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.

    So that's the basic collections. Let's summarize what we've learned in a quick reference table.

    Collection        Ordered?  Contiguous?  Direct Access?  Lookup                     Manipulate  Notes
    Dictionary        No        Yes          Via key         Key: O(1)                  O(1)        Best for high-performance lookups.
    SortedDictionary  Yes       No           Via key         Key: O(log n)              O(log n)    Compromise of Dictionary speed and ordering; uses a binary search tree.
    SortedList        Yes       Yes          Via key         Key: O(log n)              O(n)        Like SortedDictionary, except the tree is implemented in an array: faster lookups on preloaded data, slower loads.
    List              No        Yes          Via index       Index: O(1); value: O(n)   O(n)        Best for smaller lists where direct access is required and no ordering.
    LinkedList        No        No           No              Value: O(n)                O(1)        Best for lists where inserting/deleting in the middle is common and no direct access is required.
    HashSet           No        Yes          Via key         Key: O(1)                  O(1)        Unique unordered collection; like a Dictionary where key and value are the same object.
    SortedSet         Yes       No           Via key         Key: O(log n)              O(log n)    Unique ordered collection; like SortedDictionary where key and value are the same object.
    Stack             No        Yes          Only top        Top: O(1)                  O(1)*       Essentially the same as List<T>, except only processed as LIFO.
    Queue             No        Yes          Only front      Front: O(1)                O(1)        Essentially the same as List<T>, except only processed as FIFO.

    The Original Collections: System.Collections namespace

    The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

    ArrayList - a dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
    Hashtable - an associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
    Queue - a first-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
    SortedList - an associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
    Stack - a last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

    In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

    The Concurrent Collections: System.Collections.Concurrent namespace

    The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them.

    ConcurrentQueue - thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    ConcurrentStack - thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
    ConcurrentBag - thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
    ConcurrentDictionary - thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
    BlockingCollection - a wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

    Summary

    The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower, or even harder to maintain, if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.
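
    To make the trade-offs concrete, here is a small self-contained sketch (mine, not from the article) exercising a few of the generic collections discussed above:

        using System;
        using System.Collections.Generic;

        class CollectionsDemo
        {
            static void Main()
            {
                // Dictionary: hashed, unordered, amortized O(1) lookups.
                var orders = new Dictionary<int, string> { [42] = "widgets" };
                Console.WriteLine(orders[42]);                 // widgets

                // SortedDictionary: binary tree, O(log n), iterates in key order.
                var byId = new SortedDictionary<int, string> { [3] = "c", [1] = "a" };
                foreach (var pair in byId)                     // prints 1 then 3
                    Console.WriteLine($"{pair.Key}: {pair.Value}");

                // HashSet: O(1) membership tests, duplicates silently ignored.
                var vendors = new HashSet<string> { "ACME", "ACME", "Initech" };
                Console.WriteLine(vendors.Count);              // 2
                Console.WriteLine(vendors.Contains("ACME"));   // True

                // Queue vs Stack: FIFO vs LIFO over the same three items.
                var queue = new Queue<int>(new[] { 1, 2, 3 });
                var stack = new Stack<int>(new[] { 1, 2, 3 });
                Console.WriteLine(queue.Dequeue());            // 1 - first in, first out
                Console.WriteLine(stack.Pop());                // 3 - last in, first out
            }
        }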

    Read the article

  • What will be important in Training in 2011?

    - by anders.northeved
    Now that we have started a new year, I would like to give you a list of topics I think we will be discussing in training and learning in 2011.

    Some of the areas we have discussed earlier will still be just as important in 2011:

    Time-to-knowledge - still one of the most important issues for the training department.
    Internal content production - related to time-to-knowledge. How do we convert internal knowledge to a format that can be used for teaching others?
    LMS integration - how do we get our existing LMS fully integrated with our other ERP modules like HCM, Order Management, Finance, Payroll etc.?

    Some areas have been discussed before, but we'll focus more on these in 2011:

    Combining internal and external training - a majority of training departments use a combination of external and internal training. Having the right mix is vital for the quality and efficiency of most training organizations.
    Certification - more rules and regulations mean that managing all employee certifications is more important than ever.

    Evolving trends in 2011:

    Social Learning - we have been talking about this for a long time, but 2011 will be the year where we will start using it for real (OK, I also said so last year - but this year I'm right...).
    Real-life use of SCORM 2004 - again a topic we have talked about for a long time, but we are now actually starting to use it to give learners a better e-learning experience.
    How do we engage and delight the learner? E-learning makes economic sense, it can be easy to understand, it is convenient - but how do we make it more engaging and delight our learners?
    How to include more types of training in the LMS - one of the main focus areas of 2011 will be how to manage and measure mobile learning, on-the-job training and other forms of training in the LMS.
    Mobile Learning - with the ever-growing use of smart phones, mobile learning will be THE hot topic of 2011 in the training world.

    New topics we will begin discussing in 2011:

    What is beyond web 2.0 and social learning? Could it be content verification and personal accreditation?
    Why gaming will not be the silver bullet for all types of e-learning - many people believe gaming can be used for any kind of training, but the creation is too expensive and time-consuming for most applications.

    Do you agree with these predictions? What are your own predictions? Let me see your comments! (photo: © Marti, photoxpress.com)

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before on other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or any questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I'll update them, or just subscribe to the RSS feed. A "Green" color on the check-block means I've done that part so far; red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:

    Introducing Microsoft .NET (Pro-Developer): http://www.amazon.com/Introducing-Microsoft-Pro-Developer-David-Platt/dp/0735619182/ref=sr_1_1?s=books&ie=UTF8&qid=1296339237&sr=1-1
    Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: http://www.amazon.com/Head-First-2E-Real-World-Programming/dp/1449380344/ref=sr_1_1?ie=UTF8&qid=1296339176&sr=8-1
    Microsoft Visual C# 2008 Step by Step: http://www.amazon.com/Microsoft-Visual-2008-Step/dp/0735624305/ref=sr_1_1?s=books&ie=UTF8&qid=1296339208&sr=1-1

    [ ] The first place to start is at the official site for the certification. That's here: http://www.microsoft.com/learning/en/us/Exam.aspx?ID=70-583&Locale=en-us
    [ ] On that page you'll find several resources, and the first you should follow is the "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) - Books I'm Reading: / Links: / My Notes:
    [ ] Optimizing Data Access and Messaging (17%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing the Application Architecture (19%) - Books I'm Reading: / Links: / My Notes:
    [ ] Preparing for Application and Service Deployment (15%) - Books I'm Reading: / Links: / My Notes:
    [ ] Investigating and Analyzing Applications (16%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing Integrated Solutions (15%) - Books I'm Reading: / Links: / My Notes:

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before on other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or any questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I'll update them, or just subscribe to the RSS feed. A "Green" color on the check-block means I've done that part so far; red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:

    Introducing Microsoft .NET (Pro-Developer): link
    Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification. link
    [ ] On that page you'll find several resources, and the first you should follow is the "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) - Books I'm Reading: / Links: / My Notes:
    [ ] Optimizing Data Access and Messaging (17%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing the Application Architecture (19%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform: link / Links: / My Notes:
    [ ] Preparing for Application and Service Deployment (15%) - Books I'm Reading: / Links: / My Notes:
    [ ] Investigating and Analyzing Applications (16%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing Integrated Solutions (15%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) / Links: / My Notes:

    Read the article

  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting the hang of it, and my questions might sound stupid, especially because I am a learner in terms of techie things as well. Because of the nature of my work, where everyone uses stupid Windows and Microsoft, I need to have access to MS Office 2007/2010, as documents with too many tables or images open all haywire in LibreOffice (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayOnLinux, but have been unsuccessful so far. I downloaded an MS Office 2007 package from Pirate Bay, which I extracted into a folder. I tried numerous different ways to install it through WINE and PlayOnLinux, but will discuss the one which seems to be getting me somewhere: http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html

    Initially, when I would click on the install button of MS Office, I'd get a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me as I don't have drives named C, Z etc. I went to configure WINE and, under the drives tab, created a drive named A with the path location /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. The space complaint is also confusing, as I have at least 450GB of unused space on my computer. Anyway, when I selected the A drive for installation, the installation starts, but soon I get the following error message: "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source". The part saying "OfficeLR.Cab" has said different things after the "Office" bit every time I have made an attempt. When I select the Office.en-us sub-folder, or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"!

    I have been trying to get this sorted for 15 hours now (addictive!) and have learnt loads of things in the process, but have not managed to crack it. It might be something stupidly simple I am not aware of that is stopping it. I would really appreciate some help! Thanks a lot. Also, I am still getting used to the language, so I might have many questions. I am using Ubuntu 11.04. Also, I think I don't have Windows - when my friend installed Ubuntu on my new laptop, which had Windows 7, he was trying to keep Windows in a separate partition, but something happened and Windows was not there! Looking forward to some support! Again, thanks a lot.

    Read the article

  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on Software Engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional and agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I learnt on my own and see great value in. The course has now just ended and I am very disappointed. I now know one of the reasons why so few people in my region do Test Driven Development, or perform even basic testing methodologies: the topic was too academic! Yes, you might be able to list four different types of black-box test approaches vs. white-box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like (see the sketch below). Now, what worries me is the following: it took us 6 months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject - in fact, they now have such a complex view of what a test is that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of - but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right. Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding - I am just disappointed that this does not seem to be happening at the institution I am currently studying at ;-( Please, if you happen to be a lecturer or teacher reading this post - a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!
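
    For readers in the same boat, this is roughly the kind of thing the course never showed - a complete, minimal unit test. A sketch assuming NUnit; the class under test is invented for illustration:

        using NUnit.Framework;

        // The system under test: deliberately trivial, so the shape of the
        // test itself is the whole lesson.
        public class Calculator
        {
            public int Add(int a, int b) => a + b;
        }

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_TwoPositiveNumbers_ReturnsTheirSum()
            {
                // Arrange: create the object under test.
                var calc = new Calculator();

                // Act: exercise the behavior being verified.
                int result = calc.Add(2, 3);

                // Assert: fail the test if the expectation does not hold.
                Assert.AreEqual(5, result);
            }
        }

    That is the entire ceremony: arrange, act, assert. Nothing here needs six months of theory to read.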

    Read the article

  • Why is Transmission constantly active?

    - by Dov
    I am using Transmission (1.92, the latest version) on Mac OS X 10.6.2 Snow Leopard, and have noticed that, without any torrents loaded (at all, not even paused ones), Little Snitch reports constant network activity from it. What's going on? I'd always assumed that when I saw this with torrents loaded but inactive, it had to do with the DHT or some similar scanning activity. But what could it be doing when no torrents are loaded at all?

    Read the article

  • Script Editor in Snow Leopard painfully slow after adding apps to Library

    - by Kio Dane
    I have four different Macs that I use from time to time, and on each of them I notice a constant: adding more items to AppleScript Editor's Library window slows performance of mundane operations (opening a dictionary, switching between Library window and editor window, scrolling in the Library window, etc). In Leopard, I noticed little to no latency in opening a dictionary in Script Editor, but Snow Leopard's AppleScript Editor kills my productivity by making me wait on it with most UI interactions with the Library window.

    Read the article

  • slow pppoe connection using Ubuntu 9.10

    - by Radu
    I have a Compaq Presario CQ61 with Ubuntu 9.10 and Windows 7 installed on it. It works great except for the PPPoE connection in Ubuntu: when I dial in Windows my download speed reaches up to 91 Mb; rebooted into Ubuntu, I downloaded the same file from the same server with a speed of at most 3 Mb; checked in Windows again, 80-90 Mb constant. I can't figure out what slows the internet connection in Ubuntu. Anyone have an idea on this problem? (NO iptables configured, NO HTB, CBQ etc. configured.) Thank you

    Read the article

  • alternative filesystems for SSD

    - by freedrull
    I am tired of watching fsck check my filesystem when my Eee PC 901 shuts down abruptly due to a crash. I know that with a journaling filesystem I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine using a journaling filesystem being even more frustrating, since there will be constant writes to the journal. I will buy a new laptop without such a crummy SSD someday, but is there anything I can do now, on the software side of things?

    Read the article

  • Error with Phoenix placeholder _val in Boost.Spirit.Lex :(

    - by GooRoo
    Hello, everybody. I'm a newbie with Boost.Spirit.Lex. A strange error appears every time I try to use lex::_val in semantic actions in my simple lexer:

        #ifndef _TOKENS_H_
        #define _TOKENS_H_

        #include <iostream>
        #include <string>

        #include <boost/spirit/include/lex_lexertl.hpp>
        #include <boost/spirit/include/phoenix_operator.hpp>
        #include <boost/spirit/include/phoenix_statement.hpp>
        #include <boost/spirit/include/phoenix_container.hpp>

        namespace lex = boost::spirit::lex;
        namespace phx = boost::phoenix;

        enum tokenids
        {
            ID_IDENTIFICATOR = 1,
            ID_CONSTANT,
            ID_OPERATION,
            ID_BRACKET,
            ID_WHITESPACES
        };

        template <typename Lexer>
        struct mega_tokens : lex::lexer<Lexer>
        {
            mega_tokens()
                : identifier(L"[a-zA-Z_][a-zA-Z0-9_]*", ID_IDENTIFICATOR)
                , constant  (L"[0-9]+(\\.[0-9]+)?",     ID_CONSTANT)
                , operation (L"[\\+\\-\\*/]",           ID_OPERATION)
                , bracket   (L"[\\(\\)\\[\\]]",         ID_BRACKET)
            {
                using lex::_tokenid;
                using lex::_val;
                using phx::val;

                this->self
                    = operation  [ std::wcout << val(L'<') << _tokenid
                                // << val(L':') << lex::_val
                                   << val(L'>') ]
                    | identifier [ std::wcout << val(L'<') << _tokenid
                                   << val(L':') << _val << val(L'>') ]
                    | constant   [ std::wcout << val(L'<') << _tokenid
                                // << val(L':') << _val
                                   << val(L'>') ]
                    | bracket    [ std::wcout << phx::val(lex::_val)
                                   << val(L'<') << _tokenid
                                // << val(L':') << lex::_val
                                   << val(L'>') ]
                    ;
            }

            lex::token_def<wchar_t,      wchar_t> operation;
            lex::token_def<std::wstring, wchar_t> identifier;
            lex::token_def<double,       wchar_t> constant;
            lex::token_def<wchar_t,      wchar_t> bracket;
        };

        #endif // _TOKENS_H_

    and

        #include <cstdlib>
        #include <iostream>
        #include <locale>

        #include <boost/spirit/include/lex_lexertl.hpp>

        #include "tokens.h"

        int main()
        {
            setlocale(LC_ALL, "Russian");

            namespace lex = boost::spirit::lex;

            typedef std::wstring::iterator base_iterator;

            typedef lex::lexertl::token
            <
                base_iterator,
                boost::mpl::vector<wchar_t, std::wstring, double, wchar_t>,
                boost::mpl::true_
            > token_type;

            typedef lex::lexertl::actor_lexer<token_type> lexer_type;
            typedef mega_tokens<lexer_type>::iterator_type iterator_type;

            mega_tokens<lexer_type> mega_lexer;

            std::wstring str = L"alfa+x1*(2.836-x2[i])";
            base_iterator first = str.begin();

            bool r = lex::tokenize(first, str.end(), mega_lexer);

            if (r)
            {
                std::wcout << L"Success" << std::endl;
            }
            else
            {
                std::wstring rest(first, str.end());
                std::wcerr << L"Lexical analysis failed\n"
                           << L"stopped at: \"" << rest << L"\"\n";
            }

            return EXIT_SUCCESS;
        }

    This code causes an error in the Boost header 'boost/spirit/home/lex/argument.hpp' at line 167 while compiling:

        return: can't convert 'const boost::variant' to 'boost::variant &'

    When I don't use lex::_val, the program compiles with no errors. Obviously I am using _val the wrong way, but I do not know how to do it correctly. Help, please! :) P.S. And sorry for my terrible English…

    Read the article

  • What is a good lossless video codec for recording gameplay?

    - by Don Salva
    I'm an avid gamer and I like to record my gameplay. Usually I've been using Fraps to do it; however, I'm thinking of switching to Dxtory as it allows writing to multiple HDDs at once. Say I have three HDDs with the following write speeds: HDD1 at 50 MB/s, HDD2 at 22 MB/s and HDD3 at 45 MB/s, for a combined write speed of 117 MB/s. Dxtory allows you to utilize all three HDDs at once while recording your gameplay. Using these formulas:

        RGB24/YUV24: Width x Height x 3 x fps   = bitrate (bytes/sec)
        YUV420:      Width x Height x 3/2 x fps = bitrate (bytes/sec)
        YUV410:      Width x Height x 9/8 x fps = bitrate (bytes/sec)

    recording in the YUV420 colorspace at 1920x1080 with 30 fps, I'd need about 95 MB/s write speed (1920 x 1080 x 3/2 x 30 = 93,312,000 bytes/sec). Dxtory is good because it allows me to play at a constant 60 fps while recording at 30 fps. Fraps does not (even though they say it does); once you start recording with Fraps, the game's fps drops. So I'm looking for a codec that doesn't need a very high write speed (bitrate) yet records in good (lossless) quality. Dxtory comes with its own codec, the Dxtory codec, which allows me some experimentation. Fraps also has its own codec, which I can use in Dxtory to experiment around. I also came across http://lags.leetcode.net/codec.html. Are there more lossless codecs out there (besides Fraps' and Dxtory's) which are good for what I want to do?

    Edit: To clarify - yes, I'm aware a lossless codec always has "good" quality. But that's not what I'm looking for. Let me take the Fraps codec and the Dxtory codec to clarify what I'm after. When I record with the Dxtory codec in the RGB colorspace at 1920x1080 with a target of 30 fps, I can play the game at 60 fps BUT I'm recording at 10-15 fps; that's because RGB with Dxtory needs much, much more write speed than my HDDs can handle. When recording with the Dxtory codec in the YUV410 colorspace at 1920x1080 with a target of 30 fps, I can play at 60 fps and record at 30 fps; again, that's because YUV410 in Dxtory's codec takes much, much less write speed than RGB. When recording with the Fraps codec (I don't know the colorspace Fraps records in; I guess YUV420), I can play at 60 fps and record at 30 fps. What I'm looking for is a lossless codec that can record in YUV420 (or even RGB??) which does not exceed a write speed (or bitrate, if you will) of 100 MB/s at 1920x1080 - in other words, which will allow me to record at a constant 30 fps. Obviously the best solution would be to buy an SSD, but that's not what I'm after.
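
    The bitrate formulas above are easy to sanity-check in code. A tiny calculator (my sketch; the bytes-per-pixel factors are the ones quoted in the post):

        using System;

        static class BitrateCalc
        {
            // Bytes per pixel for the colorspaces quoted in the post:
            // RGB24/YUV24 = 3.0, YUV420 = 1.5, YUV410 = 9.0 / 8.
            public static double BytesPerSecond(int width, int height,
                                                double bytesPerPixel, int fps) =>
                (double)width * height * bytesPerPixel * fps;

            static void Main()
            {
                // 1080p30 in YUV420: prints 93.312 MB/s, matching the
                // "about 95 MB/s" figure in the post.
                double rate = BytesPerSecond(1920, 1080, 1.5, 30);
                Console.WriteLine($"{rate / 1e6:F3} MB/s");
            }
        }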

    Read the article

  • How to identify heavy write to disk?

    - by Darth
    I have this problem with a server running a CakePHP application. The server is insanely slow; I first thought it was an application problem, but then I found a constant 5-6 MB/s write to disk. What is the easiest way to find the cause of such heavy writing? The server is running Gentoo.

    Read the article

  • avconv - not working with movflags

    - by MarKa
    With the parameter "-movflags frag_custom" my avconv says:

        [mp4 muxer @ 0x8d05f60] [Eval @ 0x7fffcc763f00] Undefined constant or missing '(' in 'frag_custom'
        [mp4 muxer @ 0x8d05f60] Unable to parse option value "frag_custom"
        [mp4 muxer @ 0x8d05f60] Error setting option movflags to value frag_custom.

    But my avconv is compiled with an mp4 muxer, so what's the problem? The version is avconv 0.8.1-4:0.8.1-0ubuntu1.

    Read the article

  • Remote boot and login with Mac and iPhone?

    - by Moshe
    I need (free) software to log in to my iMac from my iPod. TeamViewer works, but the iMac puts itself to sleep. Are there any programs that can turn on my iMac remotely? Alternatively, are there any settings I can change to keep my iMac on at all times and ensure constant availability?

    Read the article
