Search Results

Search found 14184 results on 568 pages for 'peter small'.

Page 40/568

  • How to use an OS X calendar as a better time-tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to provide an overview of hours worked, availability and schedule conflicts without too much disruption to our various, hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:
    - iCloud syncing often needs to be re-jiggered
    - The "view only" option on shared calendars does not seem to work, which makes all shared calendars editable by others
    - There is no decent reporting with this workflow
    - There is no task categorisation or tagging
    - Things get very busy in iCal when working with more than 2 shared calendars
    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. Interested in solutions to improve the above workflow and enable us to elegantly increase the number of users.

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has one nearly 4 million row table and runs on a 4GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I get returns of mysql running at about 11% capacity, so no serious load. The main query against mysql runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the mysql variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of show variables (though I'm not sure it is important).

    | Variable_name | Value |
    | auto_increment_increment | 1 |
    | auto_increment_offset | 1 |
    | automatic_sp_privileges | ON |
    | back_log | 50 |
    | basedir | /usr/ |
    | bdb_cache_size | 8384512 |
    | bdb_home | /var/lib/mysql/ |
    | bdb_log_buffer_size | 262144 |
    | bdb_logdir | |
    | bdb_max_lock | 10000 |
    | bdb_shared_data | OFF |
    | bdb_tmpdir | /tmp/ |
    | binlog_cache_size | 32768 |
    | bulk_insert_buffer_size | 8388608 |
    | character_set_client | latin1 |
    | character_set_connection | latin1 |
    | character_set_database | latin1 |
    | character_set_filesystem | binary |
    | character_set_results | latin1 |
    | character_set_server | latin1 |
    | character_set_system | utf8 |
    | character_sets_dir | /usr/share/mysql/charsets/ |
    | collation_connection | latin1_swedish_ci |
    | collation_database | latin1_swedish_ci |
    | collation_server | latin1_swedish_ci |
    | completion_type | 0 |
    | concurrent_insert | 1 |
    | connect_timeout | 10 |
    | datadir | /var/lib/mysql/ |
    | date_format | %Y-%m-%d |
    | datetime_format | %Y-%m-%d %H:%i:%s |
    | default_week_format | 0 |
    | delay_key_write | ON |
    | delayed_insert_limit | 100 |
    | delayed_insert_timeout | 300 |
    | delayed_queue_size | 1000 |
    | div_precision_increment | 4 |
    | keep_files_on_create | OFF |
    | engine_condition_pushdown | OFF |
    | expire_logs_days | 0 |
    | flush | OFF |
    | flush_time | 0 |
    | ft_boolean_syntax | + -

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.

  • Small font on all 'new' messages suddenly in Gmail on Ubuntu in Opera, why?

    - by Michael Durrant
    This seemed to happen out of the blue. I have been using Opera for years. I can't find anywhere in Gmail to change the default font that is used in Opera. Ubuntu is version 11, Opera is version 11.6. In Firefox the font is normal size. I have tried playing around with all the browser settings I can find plus Ubuntu system settings (I had seen some mentions of Opera using the system default font sometimes), but no success so far. Really a bummer, as I have been using Opera for two years and if I can't resolve it I will not be able to use it. I can switch to Firefox but I don't want to.

  • I run about 100 small traffic websites, what host would you recommend (expansion is planned)?

    - by MALON
    I know there are plenty of suggestions like asmallorange, linode, etc, but how well do these apply to someone who is running 100 sites? Traffic can be anywhere from zero hits a month up to about 1,000. The host I'm using right now doesn't allow access to httpd.conf or other important apache features. If I had to guess, it seems like Linode or other services like it are right up my alley, however, I am not great with linux. I can get by alright in Ubuntu, but that's about it. Will this knowledge be enough to get by with Linode? What about domain name transfers? The way it works now for me is if someone has an existing site, I ask them to get the domain transfer code and then I send the domain name xfer code to my current host and they take care of the rest. Does Linode take care of domain name transfers? How do I do it?

  • How to train users converting from PC to Mac/Apple at a small non-profit?

    - by Everette Mills
    Background: I am part of a team that provides volunteer tech support to a local non-profit. We are in the position to obtain a grant to update almost all of our computers (many of them 5 to 7 year old machines running XP), provide laptops for users that need them, etc. We are considering switching our users from PC (WinXP) to Macs. The technical aspects of switching will not be an issue for the team. We are in the process of planning data conversions, machine setup, server changes, etc regardless of whether we switch to Macs or much newer PCs. About 1/4 of the staff uses or has access to a Mac at home; these users already understand the basics of using the equipment. We have another set of (generally younger) users that are technically savvy and, while slightly inconvenienced and slowed for a few days, should be able to switch over quickly. Finally, several members of the staff are older and have many issues using their computers today. We think in the long run switching to Macs may provide a better user experience, fewer IT headaches, and more effective use of computers. The question we have is what resources and training (webpages, books, online training materials or online courses) you would recommend we provide to users to enable the switchover to happen smoothly, especially with a focus on providing different levels of training and support to users with different skill levels. If you have done this in your own organization, what steps were successful and what areas were less successful?

  • Can't connect to my server from outside my wireless network. Server is OpenERP running on Ubuntu 12.04 desktop; router is a Cisco small business router

    - by user2613541
    I've looked on the internet regarding port forwarding. I've successfully forwarded port 8069 to my server's IP address. I can access OpenERP when I'm connected to my office's network, but not when I'm outside it. What am I missing? My computer's IP address starts with 192... Do I have to first use the router's IP address and then my server's IP address to get to my server from the outside? What should I type in my internet browser? I looked all day yesterday.

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of:
    Install
    First up, install Preview 2 and try out the demos I was showing at the conference. Remember that IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-)
    Learn
    And then check out the following webcasts, which were recorded in March this year at MIX:
    - In-Depth Look At Internet Explorer 9, presented by Ted Johnson & John Hrvatin: http://live.visitmix.com/MIX10/Sessions/CL28
    - High Performance Best Practices For Web Sites, presented by Jason Weber: http://live.visitmix.com/MIX10/Sessions/CL29
    - HTML5: Cross Browser Best Practices, presented by Tony Ross: http://live.visitmix.com/MIX10/Sessions/CL27
    - Internet Explorer Developer Tools, presented by Jon Seitel: http://live.visitmix.com/MIX10/Sessions/FT51
    - SVG: The Past, Present And Future of Vector Graphics For The Web, presented by Patrick Dengler and Doug Schepers: http://live.visitmix.com/MIX10/Sessions/EX30
    - Day 2 Keynote containing IE9, presented by Dean Hachamovitch: http://live.visitmix.com/MIX10/Sessions/KEY02
    Each session page links to downloadable slides and videos (MP4, small WMV and large WMV).

  • What are basic programs like recursion, Fibonacci, and small trick programs?

    - by Mike
    This question may seem daft (I'm new to 'programming' and should probably stop if this is the type of question I'm required to ask)... What are: "basic programs like recursion, fibonacci, factorial, string manipulation, small trick programs"? I've recently read Coding Horror - the non programmer and followed the links to Kegel and How to get hired. Then I delved through some similar questions here (hence the block quote) and I realised that as a fully fledged non-programmer I probably wouldn't know if I knew recursion (or any of the others), because I wouldn't know what it looked like, why it was used, or what the results would look like after it was used. I suppose I'm trying to get a picture of "the basics": what the principles are and why we learn them, where they'll be used and what results you're looking for. If they'll be used as an interview question during my first interview sometime in 2020, I would like to look less ignorant than those 199 out of 200 who just don't know the how, or the why, of programming. As always... I'll get my coat. Thanks Mike
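    For concreteness (this example is not part of the original question), here is roughly what those basics look like in Python; the function names are arbitrary, and each one is the kind of small exercise the quoted advice is pointing at:

    def factorial(n):
        """Recursion: a function defined in terms of itself, with a base case."""
        return 1 if n <= 1 else n * factorial(n - 1)

    def fibonacci(n):
        """Fibonacci: each number is the sum of the previous two (0, 1, 1, 2, 3, 5, ...)."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def reverse_words(sentence):
        """String manipulation: split a sentence into words and reverse their order."""
        return " ".join(reversed(sentence.split()))

    print(factorial(5))                            # 120
    print([fibonacci(i) for i in range(8)])        # [0, 1, 1, 2, 3, 5, 8, 13]
    print(reverse_words("hello world from Mike"))  # Mike from world hello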

  • How Do I Make A Small jQuery Confirm Replacement?

    - by Volomike
    I have a table row with a Delete link on it for deleting that row. I want to create a jQuery feature that pops up above that Delete link an "Are you sure?" yes/no confirm dialog, but in a very small size. How do I achieve this in the least lines of code in jQuery? I mean, I need to determine the x,y of the Delete link that was detected from a click, and then move (and fadeIn()) a DIV into that location. If they click anywhere on that page except Yes on the yes/no, the thing does a fadeOut(). If they click Yes, then it deletes the row with a fadeOut() effect. Note on this I don't need a modal screen dimmer, but the confirm needs to work modally. (Items above that are bolded are something I don't exactly know how to do. Items bolded and italicized are important to what I need to do.) I would think a widget like this would be critical to any Web 2.0 design, much like the Pines Notify feature seems essential.

  • How to make a small engine like Wolfram|Alpha?

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    Let's say I have three models/tables: operating_systems, words, and programming_languages:

    # operating_systems
    name:string     created_by:string    family:string
    Windows         Microsoft            MS-DOS
    Mac OS X        Apple                UNIX
    Linux           Linus Torvalds       UNIX
    UNIX            AT&T                 UNIX

    # words
    word:string     defenitions:string
    window          (serialized hash of defenitions)
    hello           (serialized hash of defenitions)
    UNIX            (serialized hash of defenitions)

    # programming_languages
    name:string     created_by:string    example_code:text
    C++             Bjarne Stroustrup    #include <iostream> etc...
    HelloWorld      Jeff Skeet           h
    AnotherOne      Jon Atwood           imports 'SORULEZ.cs' etc...

    When a user searches hello, the system shows the definitions of 'hello'. This is relatively easy to implement. However, when a user searches UNIX, the engine must choose: word or operating_system. Also, when a user searches windows (small letter 'w'), the engine chooses word, but should also show "Assuming 'windows' is a word. Use as an <a href="etc..">operating system</a> instead." Can anyone point me in the right direction with parsing and choosing the topic of the search query? Thanks. Note: it doesn't need to be able to perform calculations as WA can do.
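    Not from the original question: a minimal Python sketch of the 'choose the topic' step, assuming the three tables are available as simple in-memory name sets (a real version would query the models and could rank matches more cleverly, but the idea of preferring an exact case match and offering the other topics as suggestions is the same):

    # Hypothetical in-memory stand-ins for the three tables in the question.
    operating_systems = {"Windows", "Mac OS X", "Linux", "UNIX"}
    words = {"window", "hello", "UNIX"}
    programming_languages = {"C++", "HelloWorld", "AnotherOne"}

    topics = [("word", words), ("operating system", operating_systems),
              ("programming language", programming_languages)]

    def lookup(query):
        """Return (best_match, suggestions): exact-case matches rank above
        case-insensitive ones; every other matching topic becomes a
        'use as ... instead' suggestion."""
        matches = []
        for topic, names in topics:
            for name in names:
                if name == query:
                    matches.append((0, topic, name))   # exact case
                elif name.lower() == query.lower():
                    matches.append((1, topic, name))   # differs only in case
        matches.sort()
        if not matches:
            return None, []
        return matches[0][1:], [m[1:] for m in matches[1:]]

    print(lookup("UNIX"))     # matches both a word and an operating system
    print(lookup("windows"))  # only a case-insensitive match: the OS 'Windows'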

  • How can you make the copyright text in a Google Map wrap when the map is small?

    - by Paul D. Waite
    When you embed a Google Map on a web page, copyright text is included on the map. This is the HTML:

    <div style="border-top: 10px solid rgb(204, 0, 0); -moz-user-select: none; z-index: 0; position: absolute; right: 3px; bottom: 2px; color: black; font-family: Arial,sans-serif; font-size: 11px; white-space: normal; text-align: right; margin-left: 70px; width: 210px;" dir="ltr">
      <span></span>
      <span>Map data &copy;2010 LeadDog Consulting, Europa Technologies - </span>
      <a href="http://www.google.com/intl/en_ALL/help/terms_maps.html" target="_blank" class="gmnoprint terms-of-use-link" style="color: rgb(119, 119, 204);">Terms of Use</a>
      <span></span>
    </div>

    If you embed a map with a small width, the copyright text extends outside of the <div>, instead of wrapping within it. I've tried using jQuery to select this HTML based on its contents (using :contains()), but it doesn't seem to work in IE 8 (which is odd, as it works fine in IE 7). Any idea what's up with IE 8? Any other methods to achieve the same result?

  • problem with evolutionary algorithms degrading into simulated annealing: mutation too small?

    - by Schnalle
    I have a problem understanding evolutionary algorithms. I tried using this technique several times, but I always ran into the same problem: degeneration into simulated annealing. Let's say my initial population, with fitness in brackets, is: A (7), B (9), C (14), D (19). After mating and mutation I have the following children: AB (8.3), AC (12.2), AD (14.1), BC (11), BD (14.7), CD (17). After elimination of the weakest, we get A, AB, B, AC. Next turn, AB will mate again with a result around 8, pushing AC out. Next turn, AB again, pushing B out (assuming mutation changes fitness mostly in the 1 range). Now, after only a few turns the pool is populated with the originally fittest candidates (A, B) and mutations of those two (AB). This happens regardless of the size of the initial pool, it just takes a bit longer. Say, with an initial population of 50 it takes 50 turns, then all others are eliminated, turning the whole setup into a more complicated simulated annealing. In the beginning I also mated candidates with themselves, worsening the problem. So, what do I miss? Are my mutation rates simply too small, and will it go away if I increase them? Here's the project I'm using it for: http://stefan.schallerl.com/simuan-grid-grad/ Yeah, the code is buggy and the interface sucks, but I'm too lazy to fix it right now - and be careful, it may lock up your browser. Better use Chrome, even though Firefox is not slower than Chrome for once (probably the tracing for the image comparison pays off, yay!). If anyone is interested, the code can be found here. Here I just dropped the ev-alg idea and went for simulated annealing. PS: I'm not even sure about simulated annealing - it is like evolutionary algorithms, just with a population size of one, right?
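    Not part of the original post: a minimal Python sketch of the dynamics described above, under the assumptions that fitness is a single number being minimized and that a child's fitness is roughly the parents' average plus gaussian noise of width sigma. With sigma much smaller than the spread of the initial population, the pool collapses onto near-copies of the best initial candidates, which is the degeneration described; a larger sigma (or a diversity-preserving selection scheme such as tournament selection) keeps the pool spread out:

    import random

    def evolve(pop, generations=50, sigma=1.0):
        """Selection as described above: children are roughly the average of two
        parents plus gaussian noise of width sigma; keep the len(pop) fittest
        of parents + children (lower fitness = better, as in the example)."""
        size = len(pop)
        for _ in range(generations):
            children = []
            for _ in range(size):
                a, b = random.sample(pop, 2)
                children.append((a + b) / 2 + random.gauss(0, sigma))
            pop = sorted(pop + children)[:size]
        return pop

    # With sigma small relative to the population spread, the pool collapses onto
    # near-copies of the initial best candidates; compare the spread of the results.
    print(evolve([7, 9, 14, 19], sigma=0.1))
    print(evolve([7, 9, 14, 19], sigma=5.0))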

  • Idea for a small project, should I use Python?

    - by Robb
    I have a project idea, but I'm unsure if using Python would be a good idea. Firstly, I'm a C++ and C# developer with some SQL experience. My day job is C++. I have a project idea I'd like to create and was considering developing it in a language I don't know. Python seems to be popular and has piqued my interest. I definitely use OOP in programming and I understand Python would work fine with that style. I could be way off on this, I've only read small bits and pieces about the language. The project won't be public or anything, just purely something of my own creation to dabble in at home. So the project would essentially represent a simple game idea I have. The game would consist of roughly these things:
    - Data structures to hold specific information (would be strongly typed).
    - A way to output the gamestate for the players. This is completely up in the air, it can be graphical or text based, I don't really care at this point.
    - A way to save off game data for the players in something like a database or file system.
    - A relatively easy way for me to input information and a 'GO' button which processes the changes and obviously creates a new gamestate.
    The game would function similar to a board game. Really nothing out of the ordinary when I look back at that list. Would this be a fun way to learn Python or should I select another language?
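    Not from the original post: a minimal sketch of how the listed requirements might map onto plain Python, using a made-up turn-based game state; dataclasses stand in for the typed data structures and the standard json module covers saving game data to the file system. All names here are invented for illustration:

    import json
    from dataclasses import dataclass, asdict, field

    @dataclass
    class Player:                      # typed-ish data structure
        name: str
        score: int = 0

    @dataclass
    class GameState:
        turn: int = 0
        players: list = field(default_factory=list)

        def advance(self):
            """The 'GO' button: process the turn and produce the next gamestate."""
            self.turn += 1

        def save(self, path):
            """Save off game data to the file system."""
            with open(path, "w") as f:
                json.dump(asdict(self), f)

    state = GameState(players=[Player("Robb"), Player("Alice")])
    state.advance()
    state.save("game.json")
    print(state)                       # text-based output of the gamestate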

  • Work for huge company or small company that makes products for huge company?

    - by TheGambler
    I'm facing a career decision. I currently work for a huge, huge, huge company (this should bring 3-4 companies to mind) as a programmer. I work on a really big (3 million lines of code) J2EE web application. I've been out of college for 2 years and I pretty much know that big corporate life isn't for me. There is a career opportunity to work for a small company that has a lot of funding and huge potential to sell their main project to this huge company I work for. This company makes kiosk software and works with a nearby kiosk software company. The deal isn't done, but I would come straight in to work on this project. This would be more of an interactive media position and I would do the development in Flash/ActionScript. I personally think this will be more interesting work than what I do now. This company does have other clients right now, just not on the scale of this potential huge client. I was told that my initial salary would be equal to what I'm making now, but if we land this client it could go up a lot more. I have a wife and child, which should and will play a part in my decision on whether to take this job. We do, though, have family we could live with until we got back on our feet if crap hit the fan. So with the information provided, does this sound like a good opportunity and should I take it? This is a huge decision, and the reason I'm asking the question here is that there are a lot of people here who have probably faced this decision before.

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small - just a dozen rows - but those dozen rows each have a couple rows from other tables with foreign keys to it, and those couple rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Everyone has mentioned sp_generate_inserts. When using this, how do you prevent Identity columns from messing everything up? Do you just turn IDENTITY INSERT on?
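    Not from the original post: the 'transitive closure' idea above, sketched in Python over a hypothetical in-memory foreign-key graph. In practice the edges would come from the database's foreign-key metadata (or a script built around sp_generate_inserts), not hard-coded dicts, and the collected rows would then be exported as INSERT statements:

    from collections import deque

    # Hypothetical foreign-key edges, keyed by (table, primary key).
    references = {                      # rows this row depends on
        ("Order", 7): [("Customer", 3)],
        ("OrderLine", 21): [("Order", 7), ("Product", 99)],
    }
    referenced_by = {                   # rows that depend on this row
        ("Customer", 3): [("Order", 7)],
        ("Order", 7): [("OrderLine", 21)],
    }

    def closure(seed_rows):
        """Collect every row reachable from the seeds by following foreign keys
        in either direction: what they depend on and what depends on them."""
        seen = set(seed_rows)
        queue = deque(seed_rows)
        while queue:
            row = queue.popleft()
            for nxt in references.get(row, []) + referenced_by.get(row, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    print(closure([("Order", 7)]))  # the dozen rows plus everything linked to them

    On the identity question: SQL Server allows explicit values in an identity column if you run SET IDENTITY_INSERT <table> ON before the generated INSERTs (and OFF afterwards); it can only be enabled for one table at a time in a session.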

  • How should I deal with floating point numbers that can get so small that they become zero?

    - by Tristan Havelick
    So I just fixed an interesting bug in the following code, but I'm not sure the approach I took is the best:

    p = 1
    probabilities = [ ... ]  # a (possibly) long list of numbers between 0 and 1
    for wp in probabilities:
        if (wp > 0):
            p *= wp

    # Take the natural log, this crashes when 'probabilities' is long enough that p ends up
    # being zero
    try:
        result = math.log(p)

    Because the result doesn't need to be exact, I solved this by simply keeping the smallest non-zero value, and using that if p ever becomes 0.

    p = 1
    probabilities = [ ... ]  # a long list of numbers between 0 and 1
    for wp in probabilities:
        if (wp > 0):
            old_p = p
            p *= wp
            if p == 0:
                # we've gotten so small, its just 0, so go back to the smallest
                # non-zero we had
                p = old_p
                break

    try:
        result = math.log(p)

    This works, but it seems a bit kludgy to me. I don't do a ton of this kind of numerical programming, and I'm not sure if this is the kind of fix people use, or if there is something better I can go for.
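    One common alternative (not from the original post) is to work in log space from the start: add logarithms instead of multiplying probabilities, so the running value never underflows to zero. A minimal sketch with made-up input values:

    import math

    probabilities = [0.01] * 500   # hypothetical values, small enough that the product underflows

    # Summing logs is mathematically log(product), but the running sum never hits zero.
    log_p = sum(math.log(wp) for wp in probabilities if wp > 0)

    print(log_p)  # usable directly, e.g. for comparing likelihoods
    # math.exp(log_p) would underflow to 0.0 here, which is exactly the original problem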

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server 5.0.75 on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load will rise very high, while the memory usage stays low and the application server is no longer responsive, since it's waiting for query results. The application server has only 5-8 apache processes running (mod_perl processes). The data directory uses only 140MB of data so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is, where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!
    EDIT: I just discovered this in my slow queries log:

    # Time: 101116 11:17:00
    # User@Host: user[pass] @ [host]
    # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
    SELECT * FROM contacts
    WHERE contacts.contact_id IN (
        SELECT external_id FROM contact_relations
        WHERE external_table = 'contacts'
          AND contact_id IN (
            SELECT contact_id FROM contacts
            WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                   OR city like '%%butan%%%' OR email1 like '%%butan%%%')
              AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question: if I have a contact table containing

    John Smith, The Fun Factory, 555-1212, [email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example "actor" should bring up "Factory".

  • When to use Hibernate vs. simple ResultSets for a small application

    - by luke
    I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it extensively uses Hibernate for its data access. The component I am working on, however, is a very straightforward data importing service. Basically the workflow is:
    1. Listen for a network event
    2. Parse the data packet, extract a set of identifiers
    3. Map the identifier set to a primary key in our database
    4. Parse the rest of the packet and insert items in a related table using the foreign key found in step 3
    5. Repeat
    In the previous version of this component it used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular it is EOL), so I am in charge of replacing the data access layer for this component. So on the one hand I think I should use Hibernate because that's what the rest of the application does, but on the other I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon. So my question is (and I understand it is subjective): at what point do you think that the added complexity of using an ORM tool (in terms of configuration, dependencies...) is worth it?
    UPDATE: due to the way the data access layer for the main application was written (weird dependencies) I cannot easily use it; I would have to implement it myself.

  • What is the most efficient way to handle points / small vectors in JavaScript?

    - by Chris
    Currently I'm creating a web based (= JavaScript) application that is using a lot of "points" (= small, fixed size vectors). There are basically two obvious ways of representing them:

    var pointA = [ xValue, yValue ];

    and

    var pointB = { x: xValue, y: yValue };

    So translating my point a bit would look like:

    var pointAtrans = [ pointA[0] + 3, pointA[1] + 4 ];
    var pointBtrans = { x: pointB.x + 3, y: pointB.y + 4 };

    Both are easy to handle from a programmer's point of view (the object variant is a bit more readable, especially as I'm mostly dealing with 2D data, seldom with 3D and hardly ever with 4D - but never more. It'll always fit into x, y, z and w). But my question is now: what is the most efficient way from the language perspective - theoretically and in real implementations? What are the memory requirements? What are the setup costs of an array vs. an object? ... My target browsers are Firefox and the WebKit based ones (Chromium, Safari), but it wouldn't hurt to have a great (= fast) experience under IE and Opera as well...

  • Why might changes be populated from one NSManagedObjectContext to another without an explicit merge?

    - by Mike Laurence
    I'm working on an object import feature that utilizes multiple threads/NSManagedObjectContexts, using http://www.mac-developer-network.com/columns/coredata/may2009/ as my guide (note that I am developing for iPhone). For some reason, when I save one of my contexts the other is immediately updated with the changes, even though I have commented out my calls to mergeChangesFromContextDidSaveNotification. Are there any reasons the contexts might be merging into one another without an explicit call? Here is a log of what's going on:

    // 1.) Main context is saved with "Peter Gabriel"
    // 2.) Test context is created, begins with same contents as main context
    // 3.) Main context is inserted with "Spoon"
    // 4.) Test context is inserted with "Phoenix"
    // Contents at this point:
    CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Spoon" )
    CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" )
    // 5.) testContext is saved
    // New contents of contexts:
    CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Phoenix", "Spoon" )
    CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" )

    As you can see, the test context is saved midway through, and the main context suddenly has the new objects from the test context, even though I haven't performed the whole NSManagedObjectContextDidSaveNotification/mergeChangesFromContext combo. My understanding is that no changes will ever be merged unless done so explicitly... does anyone know what's going on here?

  • Mongodb performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options:

    C:\mongodb\bin>mkdir data
    C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet

    I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance.

    Starting Tests
    encode (small)..................................320000   00:00:00.0156250
    encode (medium).................................80000    00:00:00.0625000
    encode (large)..................................1818     00:00:02.7500000
    decode (small)..................................320000   00:00:00.0156250
    decode (medium).................................160000   00:00:00.0312500
    decode (large)..................................2370     00:00:02.1093750
    insert (small, no index)........................2176     00:00:02.2968750
    insert (medium, no index).......................2269     00:00:02.2031250
    insert (large, no index)........................778      00:00:06.4218750
    insert (small, indexed).........................2051     00:00:02.4375000
    insert (medium, indexed)........................2133     00:00:02.3437500
    insert (large, indexed).........................835      00:00:05.9843750
    batch insert (small, no index)..................53333    00:00:00.0937500
    batch insert (medium, no index).................26666    00:00:00.1875000
    batch insert (large, no index)..................1114     00:00:04.4843750
    find_one (small, no index)......................350      00:00:14.2812500
    find_one (medium, no index).....................204      00:00:24.4687500
    find_one (large, no index)......................135      00:00:37.0156250
    find_one (small, indexed).......................352      00:00:14.1718750
    find_one (medium, indexed)......................184      00:00:27.0937500
    find_one (large, indexed).......................128      00:00:38.9062500
    find (small, no index)..........................516      00:00:09.6718750
    find (medium, no index).........................316      00:00:15.7812500
    find (large, no index)..........................216      00:00:23.0468750
    find (small, indexed)...........................532      00:00:09.3906250
    find (medium, indexed)..........................346      00:00:14.4375000
    find (large, indexed)...........................212      00:00:23.5468750
    find range (small, indexed).....................440      00:00:11.3593750
    find range (medium, indexed)....................294      00:00:16.9531250
    find range (large, indexed).....................199      00:00:25.0625000
    Press any key to continue...

    For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing?
    Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running.
    Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.

  • How to perform a Linq2Sql query on the following dataset

    - by Bas
    I have the following tables:

    Person(Id, FirstName, LastName)
    {
        (1, "John", "Doe"),
        (2, "Peter", "Svendson"),
        (3, "Ola", "Hansen"),
        (4, "Mary", "Pettersen")
    }

    Sports(Id, Name)
    {
        (1, "Tennis"),
        (2, "Soccer"),
        (3, "Hockey")
    }

    SportsPerPerson(Id, PersonId, SportsId)
    {
        (1, 1, 1),
        (2, 1, 3),
        (3, 2, 2),
        (4, 2, 3),
        (5, 3, 2),
        (6, 4, 1),
        (7, 4, 2),
        (8, 4, 3)
    }

    Looking at the tables, we can conclude the following facts:
    - John plays Tennis
    - John plays Hockey
    - Peter plays Soccer
    - Peter plays Hockey
    - Ola plays Soccer
    - Mary plays Tennis
    - Mary plays Soccer
    - Mary plays Hockey

    Now I would like to create a Linq2Sql query which retrieves all Persons who play Hockey and Soccer. Executing the query should return: Peter and Mary. Does anyone have any ideas on how to approach this in Linq2Sql?
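    Not part of the original question: the underlying set logic (keep the people whose set of sports contains both Hockey and Soccer), sketched here in Python using the question's data; translating it into an actual Linq2Sql query is left to the answers, but the grouping/containment idea is the same:

    people = {1: "John", 2: "Peter", 3: "Ola", 4: "Mary"}
    sports = {1: "Tennis", 2: "Soccer", 3: "Hockey"}
    sports_per_person = [(1, 1, 1), (2, 1, 3), (3, 2, 2), (4, 2, 3),
                         (5, 3, 2), (6, 4, 1), (7, 4, 2), (8, 4, 3)]

    wanted = {"Hockey", "Soccer"}

    # Build each person's set of sport names, then keep the people whose set
    # contains every wanted sport.
    played = {}
    for _, person_id, sport_id in sports_per_person:
        played.setdefault(person_id, set()).add(sports[sport_id])

    result = [people[pid] for pid, names in played.items() if wanted <= names]
    print(result)  # ['Peter', 'Mary']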

  • Objective-C - How to serialize an audio file into small packets that can be played?

    - by vfn
    Hi there. So, I would like to take a sound file, convert it into packets, and send it to another computer. I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 so that peer2 can play it. The scenario is: peer1 has an audio file, splits the audio file into many small packets, puts them in NSData objects and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this? Or even if it is possible?
    EDIT: Here is some code to illustrate what I would like to achieve.

    // This code is part of peer1, the one who sends the data
    - (void)sendData
    {
        int packetId = 0;
        NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
        NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
        NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];
        // Splitting the audio in 2 pieces
        // This is only an illustration
        // The idea is to split the data into multiple pieces
        // depending on the size of the file to be sent
        NSRange soundRange;
        soundRange.length = [soundData length]/2;
        soundRange.location = 0;
        [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
        soundRange.length = [soundData length]/2;
        soundRange.location = [soundData length]/2;
        [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
        for (int i=0; i

    // This is the code on peer2 that would receive and play the piece of audio in each packet
    - (void)receiveData:(NSData *)data
    {
        NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
        if ([unarchiver containsValueForKey:PACKET_ID])
            NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);
        if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) {
            NSLog(@"DECODED sound");
            NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];
            if (sound == nil) {
                NSLog(@"sound is nil!");
            } else {
                NSLog(@"sound is not nil!");
                AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc];
                if ([audioPlayer initWithData:sound error:nil]) {
                    [audioPlayer prepareToPlay];
                    [audioPlayer play];
                } else {
                    [audioPlayer release];
                    NSLog(@"Player couldn't load data");
                }
            }
        }
        [unarchiver release];
    }

    So, here is what I am trying to achieve... what I really need to know is how to create the packets so peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order in which the packets are received or played... I only need to get the sound sliced and then be able to play each piece, each slice, without having to wait for the whole file to be received by peer2. Thanks!

  • Eyefinity across 3 displays with a Radeon 6800

    - by Peter G Mac.
    So I purchased a computer recently and have been trying to customise the display: Radeon HD 6800 series, Ubuntu 10.10. I have three 22-inch 1080p LCD monitors that are mounted together. Everything is working smoothly. How do I get the 'big desktop' display where I have one enormous display across all monitors? The Linux ATI Catalyst Control Center 11.2 does not give me an option to 'group' my profiles like the pictures on their site show with Windows. I have been searching all over for help. Much obliged, -Peter

  • Concatenate Gridview Data

    - by zahid mahmood
    I have a gridview with the following data:

    CustomerName   item    qty
    tom            sugar   1 kg
    Peter          Rice    2 Kg
    Jhone          Sugar   .5 kg
    tom            Rice    5 Kg
    Peter          Tea     .5 Kg
    tom            Tea     1 kg

    Now I want to display the data in the following format:

    tom     sugar 1kg, Rice 5 kg, Tea 1 kg
    Peter   Rice 1kg, Tea .5 kg
    Jhone   Sugar .5kg

    How can I achieve this?
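    Not part of the original question: the grouping step itself, sketched in Python with itertools.groupby over the question's rows; in an ASP.NET gridview the equivalent grouping would normally be done in the data source (SQL or C#) before binding:

    from itertools import groupby

    rows = [
        ("tom", "sugar", "1 kg"), ("Peter", "Rice", "2 Kg"), ("Jhone", "Sugar", ".5 kg"),
        ("tom", "Rice", "5 Kg"), ("Peter", "Tea", ".5 Kg"), ("tom", "Tea", "1 kg"),
    ]

    # groupby only groups adjacent rows, so sort by the grouping key (customer) first.
    rows.sort(key=lambda r: r[0])
    for customer, items in groupby(rows, key=lambda r: r[0]):
        print(customer, ", ".join(f"{item} {qty}" for _, item, qty in items))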
