Search Results

Search found 6931 results on 278 pages for 'almost surely'.

Page 228/278

  • Google Chrome Extension : Port: Could not establish connection. Receiving end does not exist

    - by tcornelis
    I have been looking for an answer for almost a week now, but having read all the Stack Overflow items I can't seem to find a solution that works for me. The error I'm getting is: Port: Could not establish connection. Receiving end does not exist. lastError:30 set lastError:30 dispatchOnDisconnect messaging:277 Folder layout: img developer_icon.png js sidebar.js main.js jquery-2.0.3.js manifest.json My manifest.json file looks something like this (it is version 2): "browser_action": { "default_icon": "./img/developer_icon.png" }, "content_scripts": [ { "matches": ["*://*/*"], "js": ["./js/sidebar.js"], "run_at": "document_end" } ], "background" : { "scripts" : ["./js/main.js","./js/jquery-2.0.3.js"] }, I want to handle the user clicking the extension icon so I can inject a sidebar into the existing website (because the extension I would like to develop requires that amount of space). So in main.js: chrome.browserAction.onClicked.addListener(function(tab) { chrome.tabs.getSelected(null, function(tab){ chrome.tabs.sendMessage( //Selected tab id tab.id, //Params inside a object data {callFunction: "toggleSidebar"}, //Optional callback function function(response) { console.log(response); } ); }); }); And in sidebar.js: chrome.runtime.onMessage.addListener(function(req,sender,sendResponse){ console.log("sidebar handling request"); toggleSidebar(); }); But I'm never able to see the console.log output in my console because of the error. Does someone know what I did wrong? Thanks in advance!
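
    A minimal sketch of the background-to-content-script messaging this setup relies on (Manifest V2 era APIs; the sendResponse handshake is an addition, not the poster's exact code). Note that "Receiving end does not exist" is exactly what you get when no content script with a matching listener is loaded in the target tab, for example in tabs opened before the extension was installed or reloaded:

        // main.js (background page): browserAction.onClicked already supplies the active tab
        chrome.browserAction.onClicked.addListener(function (tab) {
          chrome.tabs.sendMessage(tab.id, { callFunction: "toggleSidebar" }, function (response) {
            console.log(response);
          });
        });

        // sidebar.js (content script): answer the message so the callback above fires
        chrome.runtime.onMessage.addListener(function (req, sender, sendResponse) {
          if (req.callFunction === "toggleSidebar") {
            toggleSidebar();                       // assumed to be defined elsewhere in sidebar.js
            sendResponse({ ok: true });
          }
        });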

    Read the article

  • Why can't I play videos on my Samsung Moment w/Android 2.1 that did play on 1.5?

    - by cpcarroll71
    Hello all. I was directed here by the Google Android Devs Group. I'm still not sure if this is the place I need to be. I've been on forum after forum and asked the same question, but no one seems to know the answer or will tell me the REAL correct place to find it. I have a Samsung Moment with Android 2.1. I cannot get videos that DID play on Android 1.5 to play on 2.1 now, with very few exceptions. I'm not just talking about different videos with the same format; I'm talking about some of the exact same videos that played in 1.5 and will no longer play in 2.1. These are almost all AVI files. If Android doesn't support AVI natively with its own stock media player, then why or how was I able to play these videos flawlessly on 1.5? The way I always played the videos was by viewing my SD card, tapping the thumbnail of the video I wanted, and it would automatically open up in the phone's media player right away. Now the phone locks up and then eventually gives an error message. Please tell me someone here knows what is going on and if there is a way to fix this. Or, if nothing else, tell me where I need to look to ask this question and find an answer. Thank you.

    Read the article

  • In XSLT, how can you sort using an indirect key?

    - by edholder
    I am having trouble getting xsl:sort to understand the scope of the attributes I am referencing. Here is an XML sample document to illustrate: <Root> <DrinkSelections> <Drink id="1000" name="Coffee"/> <Drink id="1001" name="Water"/> <Drink id="1002" name="Tea"/> <Drink id="1003" name="Almost But Not Quite Entirely Unlike Tea"/> </DrinkSelections> <CustomerOrder> <Drinks> <Drink oid="1001"/> <Drink oid="1002"/> <Drink oid="1003"/> </Drinks> </CustomerOrder> </Root> I want to produce a list of drinks (sorted by name) contained in the CustomerOrder. Here is the XSLT code I am fiddling with: <xsl:for-each select="/Root/CustomerOrder/Drinks/Drink"> <xsl:sort select="/Root/DrinkSelections/Drink[@id = @oid]/@name"/> <xsl:variable name="var_oid" select="@oid"/> <xsl:value-of select="/Root/DrinkSelections/Drink[@id = $var_oid]/@name"/> </xsl:for-each> Apparently, the xsl:sort instruction is applying the "oid" attribute to the Drink elements in DrinkSelections rather than to the local Drink element. I can get around this using a variable, as in the xsl:value-of statement. But since xsl:sort must be the first child of xsl:for-each, I can't insert the xsl:variable statement before xsl:sort. Is there a way to explicitly state that the attribute value should be taken from the "local" element?
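
    One way to say "take @oid from the element currently being processed, not from the predicate's own context" is XPath's current() function, which inside a predicate still refers to the node the xsl:for-each is iterating over. A sketch of just the loop:

        <xsl:for-each select="/Root/CustomerOrder/Drinks/Drink">
          <!-- current() is the CustomerOrder Drink, so current()/@oid is its oid attribute -->
          <xsl:sort select="/Root/DrinkSelections/Drink[@id = current()/@oid]/@name"/>
          <xsl:value-of select="/Root/DrinkSelections/Drink[@id = current()/@oid]/@name"/>
        </xsl:for-each>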

    Read the article

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings, I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this: using(var tran = MyDataLayer.Transaction()) { MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2)); MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc); tran.Commit(); } I've simplified this a bit for posting, but what's going on is MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day, and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way that records should exist in the tables I'm looking at is from SprocTheFirst, and it's only ever called in this one function, so if it's called and succeeds then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?

    Read the article

  • [Apache] Creating rewrite rules for multiple urls in the same folder

    - by DavidYell
    I have been asked by our client to convert a site we created into SEO-friendly URL format. I've managed to crack a small way into this, but have hit a problem with having the same URLs in the same folder. I am trying to rewrite the following URLs, /review/index.php?cid=intercasino /review/submit.php?cid=intercasino /review/index.php?cid=intercasino&page=2#reviews I would like to get them to, /review/intercasino /submit-review/intercasino /review/intercasino/2#reviews I've almost got it working using the following rules, RewriteRule (submit-review)/(.*)$ review/submit.php?cid=$2 [L] RewriteRule (^review)/(.*) review/index.php?cid=$2 The problem, as you may already see, is that /submit-review rewrites to /review, which in turn gets rewritten to index.php, so my review submission page is lost in favour of my index page. I figured that putting [L] would prevent the second rule being called, but it seems that it rewrites both URLs in two separate passes. I've also tried [QSE], and [S=1] I would rather not have to move my files into different folders to get the rewriting to work, as that just seems too much like bad practice. If anyone could give me some pointers on how to differentiate between these similar URLs, that would be great! Thanks (Ref: http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html)
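
    In per-directory (.htaccess) context mod_rewrite re-runs the whole rule set against the rewritten URL, so [L] alone does not stop review/submit.php from matching the ^review rule on the second pass. One common workaround, sketched here on the assumption that submit.php and index.php are real files on disk and the rules live in an .htaccess file, is to skip rewriting anything that already resolves to an existing file:

        RewriteEngine On
        # Leave requests alone once they point at a real file (e.g. review/submit.php)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^submit-review/(.*)$ review/submit.php?cid=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^review/(.*)$ review/index.php?cid=$1 [L]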

    Read the article

  • How do I load PersistentDocuments into the same window

    - by Brad Stone
    I want to open NSPersistentDocuments and load them into the same window one at a time. I'm almost there but missing some steps. Hopefully someone can help me. I have a few saved documents on the hard drive. On launch my app opens to an untitled NSPersistentDocument and creates a separate NSWindowController. When I press the button to load file 1 off the hard drive, the data appears in the fields but two things are wrong that I can see: 1) changing the data doesn't make the document dirty 2) choosing save updates the persistent store (I know this because when I open the file again I see the changes) but I get an error: +entityForName: could not locate an NSManagedObjectModel for entity name 'Book' Here's my code, which is in the WindowController that was launched initially with the untitled document. This code isn't perfect. For example, I know I should processPendingChanges and save the current doc before I load the new one. This is test code to try to get over this hurdle. - (IBAction)newBookTwo:(id)sender { NSDocumentController *dc = [NSDocumentController sharedDocumentController]; NSURL *url = [NSURL fileURLWithPath:[@"~/Desktop/File 2.binary" stringByExpandingTildeInPath]]; NSError *error; MainWindowDocument *thisDoc = [dc openDocumentWithContentsOfURL:url display:NO error:&error]; [self setDocument:thisDoc]; [self setManagedObjectContext:[thisDoc managedObjectContext]]; } Thanks!

    Read the article

  • I'm getting undefined using JSON in jQuery, why?

    - by YoniGeek
    I'm learning some JSON. I'm trying to list some data about dogs from Twitter... but I can't really present the data... I believe that the error is inside the map method... something I'm missing... thanks for your help <body> <h1>U almost there!!</h1> <script src="jquery-1.7.1.js"> </script> <script> // PubSub (function( $ ) { var o = $( {} ); $.each({ trigger: 'publish', on: 'subscribe', off: 'unsubscribe' }, function( key, val ) { jQuery[val] = function() { o[key].apply( o, arguments ); }; }); })( jQuery ); $.getJSON('http://search.twitter.com/search.json?q=dogs&callback=?', function( info) { $.publish( 'twitter/info', info ); }); // ... $.subscribe( 'twitter/info', function( e, info ) { $('body').html( $.map( info, function( obj) { // <--- here's the error, something I'm missing, right? return '<li>' + obj.text + '</li>'; }).join('') ); }); </script> </body> </html>
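
    For what it's worth, the old Twitter search API wraps the tweets in a results array inside the JSON payload (an assumption worth checking against the actual response), so mapping over the whole info object yields entries with no text property. A sketch of just the subscriber under that assumption:

        $.subscribe('twitter/info', function (e, info) {
          // info.results is the array of tweets; each element has a .text property
          $('body').html(
            $.map(info.results, function (tweet) {
              return '<li>' + tweet.text + '</li>';
            }).join('')
          );
        });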

    Read the article

  • Is it a bad idea for the same object to have different side effects after a method call?

    - by Nathan W
    Hi all, I'm having a bit of a design issue (again). Say I have this ButtonPad object: this object is a wrapper over one in a COM object. At the moment it has a method on it called CreateInto(IComObject). To make a new button pad in the COM object, you do: ButtonPad pad = new ButtonPad(); pad.Title = "Hello"; // Set some more properties. pad.CreateInto(Cominstance); The CreateInto method will execute the right commands to build the button pad in the COM object. After it has been created, any calls against it are forwarded to the underlying object, so: pad.Title = "New title"; will call the COM object to set the title and also set the internal title variable. Basically, any calls before the CreateInto method only affect the .NET object; anything after has the side effect of calling the COM object as well. I'm not very good at sequence diagrams, but here is my attempt to explain what's going on: This doesn't feel good to me; it feels like I'm lying to the user about what the button pad does. I was going to have an object called WrappedButtonPad, returned from CreateInto, which the user could make calls against to make changes to the COM object, but I feel that having two objects that almost do the same thing and only differ by name might be even worse. Are these valid designs, or am I right to be worried? How else would you handle an object that can create and query a COM object?

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web-app that has a master mysql db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load-balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving webpages. I am using an include file to set up the connection to the databases, such as: //for master server (i.e. - UPDATE/INSERT/DELETE statements) $Host = "10.0.0.x"; $User = "xx"; $Password = "xx"; $Link = mysql_connect( $Host, $User, $Password ); if( !$Link ) { die( "Master database is currently unavailable. Please try again later." ); } //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost $Host_Local = "localhost"; $User_Local = "xx"; $Password_Local = "xx"; $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local ); //fail back to master if slave db is down if( !$Link_Local ) { $Link_Local = mysql_connect( $Host, $User, $Password ); } I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to the localhost and returns back to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or, is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
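
    One knob that addresses the long hang directly is the mysql.connect_timeout ini setting, which the old mysql extension honours and which can be lowered just before the slave connection attempt. A sketch reusing the variable names from the question (the 2-second value is an assumption to tune):

        // Give up on the local slave quickly instead of waiting tens of seconds
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = mysql_connect($Host_Local, $User_Local, $Password_Local);
        // Fail back to the master, allowing it a more generous timeout
        if (!$Link_Local) {
            ini_set('mysql.connect_timeout', 10);
            $Link_Local = mysql_connect($Host, $User, $Password);
        }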

    Read the article

  • Simple question about the lunarlander example.

    - by Smills
    I am basing my game off the lunarlander example. This is the run loop I am using (very similar to what is used in lunarlander). I am seeing considerable performance issues associated with my drawing, even if I draw almost nothing. I noticed the method below. Why is the canvas variable declared and set to null each cycle? @Override public void run() { while (mRun) { Canvas c = null; try { c = mSurfaceHolder.lockCanvas();//null synchronized (mSurfaceHolder) { updatePhysics(); doDraw(c); } } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { mSurfaceHolder.unlockCanvasAndPost(c); } } } } Most of the time, what I have read about canvases is more along the lines of: mField = new Bitmap(...dimensions...); Canvas c = new Canvas(mField); My question is: why is Google's example done that way (null canvas), what are the benefits of this, and is there a faster way to do it?

    Read the article

  • Resumable Upload in Ruby on Rails

    - by user253011
    Hi, I have been searching for a way to do resumable file uploads in RoR. So far I have found that, apart from a Java applet, no client-side cross-platform agent can access the file system in such a way as to resume the file from the position where the upload was terminated (for whatever reason), with some exceptions like http://github.com/taf2/resume-up/tree/master (built in native Ruby, but it requires Google Gears, which is not yet "reliable" when it comes to cross-platform use; almost the same story as ActiveX!). Since the only reliable option left is a Java applet, is there any good tutorial/forum/documentation for those paid Java applets, such as "thin slice upload" etc., to make one work with a Rails application? I have found one, http://github.com/dassi/mediaclue, a German-only application in which they used JumpLoader. But in that application, I am unable to see resumable functionality. Scratching my head against their documentation, I found http://jumploader.com/doc_resume.html. It says that JumpLoader has resume functionality via cross-session resume, the one I am looking for (if the user closes the browser, the new session picks up the uncompleted uploads from the old session against the user id). But I can't find any example on their demos page which actually demonstrates pause/resume functionality in a continuous manner! Is it even possible to achieve that kind of resumable functionality? Please tell me about any options/examples/demos, preferably deployed in Rails. I shall be very much obliged. ~ Thanks

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that while the tarballs are sufficient to rebuild from, it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes), and long-term the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day; partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it); and again due to the size issue, I'm leaving out stuff which it would be nice to include, such as the contents of users' home directories. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway. There must be a better way. So, my question is, how should I be doing this properly? The requirements are: it needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever); it should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job); it should continue to scale with a couple more boxes, slightly more data, etc.; preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing); and an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad. My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
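
    One commonly used building block for these requirements is rsync's --link-dest option, which produces dated snapshot directories in which unchanged files are hard links to the previous snapshot, so each day only transfers and stores the changed data. A sketch only; the paths, host name and source list are made up for illustration:

        #!/bin/sh
        # Nightly snapshot to the offsite box; unchanged files are hard-linked
        # against yesterday's snapshot instead of being copied again.
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)
        rsync -az --delete \
              --link-dest="/backups/$(hostname)/$YESTERDAY" \
              /etc /home /var/backups/mysql \
              backup@offsite.example.com:/backups/$(hostname)/$TODAY/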

    Read the article

  • Simplifying loop in Objective-C

    - by Joe Habadas
    I have this enormous loop in my code (not by choice), because I can't seem to make it work any other way. If there's some way to make this simpler, as opposed to me repeating it 20+ times, that would be great, thanks. for (NSUInteger i = 0; i < 20; i++) { if (a[0] == 0xFF || b[i] == a[0]) { c[0] = b[i]; if (d[0] == 0xFF) { d[0] = c[0]; } ... below repeats +18 more times with [i+2,3,4,etc] ... if (a[1] == 0xFF || b[i + 1] == a[1]) { c[1] = b[i + 1]; if (d[1] == 0xFF) { d[1] = c[1]; } ... when it reaches the last one it calls a method ... [self doSomething]; continue; i += 19; ... then } repeats +19 times (to close things)... } } } I've tried almost every possible combo of things that I know of attempting to make this smaller and more efficient. Take a look at my flow chart — pretty, huh? I'm not a madman, honest.
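
    Assuming the 20 nested blocks really do differ only in the index (0 through 19), the whole cascade collapses into an inner loop that bails out on the first mismatch. A sketch of that idea rather than a drop-in replacement:

        for (NSUInteger i = 0; i < 20; i++) {
            NSUInteger j;
            for (j = 0; j < 20; j++) {
                // Same test as each nested if: stop on the first mismatch
                if (a[j] != 0xFF && b[i + j] != a[j]) {
                    break;
                }
                c[j] = b[i + j];
                if (d[j] == 0xFF) {
                    d[j] = c[j];
                }
            }
            if (j == 20) {          // all 20 positions matched
                [self doSomething];
                i += 19;            // mirror the original's skip-ahead
            }
        }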

    Read the article

  • C++ Template Class Constructor with Variable Arguments

    - by david
    Is it possible to create a template function that takes a variable number of arguments, for example, in this Vector< T, C > class constructor: template < typename T, uint C > Vector< T, C >::Vector( T, ... ) { assert( C > 0 ); va_list arg_list; va_start( arg_list, C ); for( uint i = 0; i < C; i++ ) { m_data[ i ] = va_arg( arg_list, T ); } va_end( arg_list ); } This almost works, but if someone calls Vector< double, 3 >( 1, 1, 1 ), only the first argument has the correct value. I suspect that the first parameter is correct because it is cast to a double during the function call, and that the others are interpreted as ints and then the bits are stuffed into a double. Calling Vector< double, 3 >( 1.0, 1.0, 1.0 ) gives the desired results. Is there a preferred way to do something like this?
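
    The underlying issue is that arguments passed through ... undergo only the default promotions, so integer literals stay ints and reading them with va_arg( arg_list, double ) is undefined behaviour. If a C++11 compiler is available, a variadic template constructor sidesteps va_list entirely; a sketch using a simplified class, not the original:

        template <typename T, unsigned C>
        class Vector {
        public:
            // Each argument is converted to T explicitly, so Vector<double, 3> v(1, 1, 1) works.
            template <typename... Args>
            explicit Vector(Args... args) : m_data{ static_cast<T>(args)... } {
                static_assert(sizeof...(Args) == C, "wrong number of components");
            }
        private:
            T m_data[C];
        };

        // Usage: Vector<double, 3> v(1, 1, 1);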

    Read the article

  • Troubles moving a UIView.

    - by Joshua
    I have been trying to move a UIView by following a users touch. I have almost got it to work except for one thing, the UIView keeps flicking between two places. Here's the code I have been using: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchDown"); UITouch *touch = [touches anyObject]; firstTouch = [touch locationInView:self.view]; lastTouch = [touch locationInView:self.view]; [self.view setNeedsDisplay]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { InSightViewController *contentView = [[InSightViewController alloc] initWithNibName:@"SubView" bundle:[NSBundle mainBundle]]; [contentView loadView]; UITouch *touch = [touches anyObject]; currentTouch = [touch locationInView:self.view]; if (CGRectContainsPoint(contentView.view.bounds, firstTouch)) { NSLog(@"touch in subView/contentView"); sub.frame = CGRectMake(currentTouch.x - 50.0, currentTouch.y, 130.0, 21.0); } NSLog(@"touch moved"); lastTouch = currentTouch; [self.view setNeedsDisplay]; } And here's what's been happening: http://cl.ly/Sjx

    Read the article

  • Is this a safe way to release resources in Java?

    - by palto
    Usually when code needs some resource that needs to be released I see it done like this: InputStream in = null; try{ in = new FileInputStream("myfile.txt"); doSomethingWithStream(in); }finally{ if(in != null){ in.close(); } } What I don't like is that you have to initialize the variable to null, set it to another value afterwards, and then in the finally block check whether the resource was initialized by checking if it is null. If it is not null, it needs to be released. I know I'm nitpicking, but I feel like this could be done more cleanly. What I would like to do is this: InputStream in = new FileInputStream("myfile.txt"); try{ doSomethingWithStream(in); }finally{ in.close(); } To my eyes this looks almost as safe as the previous one. If resource initialization fails and it throws an exception, there's nothing to be done (since I didn't get the resource), so it doesn't have to be inside the try block. The only thing I'm worried about is whether there is some way (I'm not Java certified) that an exception or error can be thrown between the operations. Even simpler example: InputStream in = new FileInputStream("myfile.txt"); in.close(); Is there any way the stream would be left open that a try-finally block would prevent?
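
    As a point of comparison, Java 7 and later express the second pattern directly with try-with-resources: the stream is closed only if its constructor succeeded, and it is closed even when doSomethingWithStream throws. A minimal sketch (the file name and the stub method mirror the question):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        class Example {
            static void doSomethingWithStream(InputStream in) throws IOException {
                // placeholder for the real work
            }

            public static void main(String[] args) throws IOException {
                // The compiler generates the equivalent try/finally, including the null check.
                try (InputStream in = new FileInputStream("myfile.txt")) {
                    doSomethingWithStream(in);
                }
            }
        }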

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment: The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand. Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package? For example, I would like to get this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev TypeError: Data values must be of type string or None. and not this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__ _DeadlockWrap(wrapF) # self.db[key] = value File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap return function(*_args, **_kwargs) File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF self.db[key] = value TypeError: Data values must be of type string or None.
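
    One way to get this effect is a custom sys.excepthook that counts the leading frames belonging to your own code and passes that count as the limit argument of traceback.print_exception; frames at and below the first library frame (bsddb in this case) are then dropped. A sketch, with the path test being a crude assumption to adapt to your installation:

        import sys
        import traceback

        def short_excepthook(exc_type, exc_value, tb):
            # Count frames until one that lives inside the Python library tree.
            depth, frame = 0, tb
            while frame is not None:
                filename = frame.tb_frame.f_code.co_filename
                if '/lib/python' in filename:
                    break
                depth += 1
                frame = frame.tb_next
            traceback.print_exception(exc_type, exc_value, tb, depth or None)

        sys.excepthook = short_excepthook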

    Read the article

  • Why can't I get all UDP packets?

    - by Jack
    My program uses UdpClient to try to receive 27 responses from 27 hosts. The size of each response is 10KB. My broadband incoming bandwidth is 150KB/s. The 27 responses are sent from the hosts almost at the same time, once every 10 seconds. However, I can only receive 8 - 17 responses each time. The number of responses that I can receive is quite dynamic but within that range. Can anyone tell me why? Why can't I receive them all? I understand UDP is not reliable, but I tried receiving 5 - 10 responses at the same time and it worked. I guess the network links are not so bad. The code is very simple. On the 27 hosts, I just use UdpClient to send 10KB to my machine. On my machine, I have one UdpClient receiving datagrams. Each time I get data, I create a thread to handle it (basically, handling it just means printing out "I received 10KB", but it runs in a thread). listener = new UDPListener(Port); listener.Start(); while (true) { try { UDPContext context = listener.Accept(); ThreadPool.QueueUserWorkItem(new WaitCallback(HandleMessage), context); } catch (Exception) { } } If I reduce the size of the response down to 3KB, things get much better: roughly 25 responses can be received. Any more ideas? UDP buffer problems?
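
    Since 27 hosts each sending 10KB at nearly the same instant adds up to roughly 270KB arriving in a burst, a plausible suspect is the socket's OS receive buffer overflowing before the application drains it; datagrams that do not fit are silently dropped. Enlarging the buffer is a one-line experiment (a sketch against a plain UdpClient, since the UDPListener wrapper in the question is the poster's own):

        // Ask the OS for a larger receive buffer before the burst arrives.
        var udp = new UdpClient(Port);
        udp.Client.ReceiveBufferSize = 1024 * 1024;   // 1 MB instead of the small default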

    Read the article

  • C++ Array vs vector

    - by blue_river
    When using a C++ vector, the time spent is 718 milliseconds, while when I use an array, the time is almost 0 milliseconds. Why is there so much of a performance difference? int _tmain(int argc, _TCHAR* argv[]) { const int size = 10000; clock_t start, end; start = clock(); vector<int> v(size*size); for(int i = 0; i < size; i++) { for(int j = 0; j < size; j++) { v[i*size+j] = 1; } } end = clock(); cout<< (end - start) <<" milliseconds."<<endl; // 718 milliseconds int f = 0; start = clock(); int arr[size*size]; for(int i = 0; i < size; i++) { for(int j = 0; j < size; j++) { arr[i*size+j] = 1; } } end = clock(); cout<< ( end - start) <<" milliseconds."<<endl; // 0 milliseconds return 0; }

    Read the article

  • How to differentiate between the same field names of two tables in a SELECT query?

    - by developer
    I have more than two tables in my database and all of them contain the same field names: table A, table B and table C each have field1, field2, field3, and so on. I have to write a SELECT query which gets almost all of the same fields from these 3 tables. I am using something like this: select a.field1,a.field2,a.field3,b.field1,b.field2,b.field3,c.field1,c.field2,c.field3 from table A as a, table B as b, table C as c where so and so. But when I print field1's value it gives me the last table's values. How can I get all the values of the three tables with the same field names? Do I have to write an individual query for every table, or is there any way of fetching them all in a single query?
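
    When rows are fetched into an associative structure keyed by column name, three columns all called field1 overwrite each other and only the last one survives, which matches the symptom described. Giving each column a distinct alias avoids that; a sketch with placeholder table names:

        SELECT a.field1 AS a_field1, a.field2 AS a_field2, a.field3 AS a_field3,
               b.field1 AS b_field1, b.field2 AS b_field2, b.field3 AS b_field3,
               c.field1 AS c_field1, c.field2 AS c_field2, c.field3 AS c_field3
        FROM   tableA AS a, tableB AS b, tableC AS c
        WHERE  ...;  -- join conditions go here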

    Read the article

  • Strange results while measuring delta time on Linux

    - by pachanga
    Folks, could you please explain why I'm getting very strange results from time to time using the following code: #include <unistd.h> #include <sys/time.h> #include <time.h> #include <stdio.h> int main() { struct timeval start, end; long mtime, seconds, useconds; while(1) { gettimeofday(&start, NULL); usleep(2000); gettimeofday(&end, NULL); seconds = end.tv_sec - start.tv_sec; useconds = end.tv_usec - start.tv_usec; mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5; if(mtime > 10) printf("WTF: %ld\n", mtime); } return 0; } (You can compile and run it with: gcc test.c -o out -lrt && ./out) What I'm experiencing is sporadic big values of the mtime variable almost every second, or even more often, e.g.: $ gcc test.c -o out -lrt && ./out WTF: 14 WTF: 11 WTF: 11 WTF: 11 WTF: 14 WTF: 13 WTF: 13 WTF: 11 WTF: 16 How can this be possible? Is the OS to blame? Does it do too much context switching? But my box is idle (load average: 0.02, 0.02, 0.3). Here is my Linux kernel version: $ uname -a Linux kurluka 2.6.31-21-generic #59-Ubuntu SMP Wed Mar 24 07:28:56 UTC 2010 i686 GNU/Linux
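
    As a side note on the measurement itself: for interval timing, clock_gettime with CLOCK_MONOTONIC is usually preferable to gettimeofday, because it cannot jump when the wall clock is adjusted (it will not remove genuine scheduler overshoot of usleep, though). A sketch of the same loop rewritten that way:

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            struct timespec start, end;
            while (1) {
                clock_gettime(CLOCK_MONOTONIC, &start);
                usleep(2000);
                clock_gettime(CLOCK_MONOTONIC, &end);
                long ms = (end.tv_sec - start.tv_sec) * 1000
                        + (end.tv_nsec - start.tv_nsec) / 1000000;
                if (ms > 10)
                    printf("WTF: %ld\n", ms);   /* same threshold as the original */
            }
            return 0;
        }
        /* build as before: gcc test.c -o out -lrt */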

    Read the article

  • return Queryable<T> or List<T> in a Repository<T>

    - by Danny Chen
    Currently I'm building a Windows application using SQLite. In the database there is a table, say User, and in my code there is a Repository<User> and a UserManager. I think it's a very common design. In the repository there is a List method: //Repository<User> class public List<User> List(where, orderby, topN parameters and etc) { //query and return } This brings a problem if I want to do something complex in UserManager.cs: //UserManager.cs public List<User> ListUsersWithBankAccounts() { var userRep = new UserRepository(); var bankRep = new BankAccountRepository(); var result = //do something complex, say "I want the users live in NY //and have at least two bank accounts in the system } You can see, returning List<User> brings a performance issue, because the query is executed earlier than expected. Now I need to change it to something like an IQueryable<T>: //Repository<User> class public TableQuery<User> List(where, orderby, topN parameters and etc) { //query and return } TableQuery<T> is part of the SQLite driver and is almost equal to IQueryable<T> in EF: it represents a query and won't execute it immediately. But now the problem is that UserManager.cs doesn't know what a TableQuery<T> is; I need to add a new reference and import namespaces like SQLite.Query in the business layer project. It really feels like bad code. Why should my business layer know the details of the database? Why should the business layer know what SQLite is? What's the correct design then?

    Read the article

  • gcc compilations (sometimes) result in cpu underload

    - by confusedCoder
    I have a large C++ program which starts out by reading thousands of small text files into memory and storing data in STL containers. This takes about a minute. Periodically, a compilation will exhibit behavior where that initial part of the program will run at about 22-23% CPU load. Once that step is over, it goes back to ~100% CPU. It is more likely to happen with the -O2 flag turned on, but not consistently. It happens even less often with the -p flag, which makes it almost impossible to profile. I did capture it once, but the gprof output wasn't helpful - everything runs with the same relative speed, just at low CPU usage. I am quite certain that this has nothing to do with multiple cores. I do have a quad-core CPU, and most of the code is multi-threaded, but I tested this issue running a single thread. Also, when I run the problematic step in multiple threads, each thread only runs at ~20% CPU. I apologize ahead of time for the vagueness of the question, but I have run out of ideas as to how to troubleshoot it further, so any hints might be helpful. UPDATE: Just to make sure it's clear, the problematic part of the code does sometimes (~30-40% of the compilations) run at 100% CPU, so it's hard to buy the (otherwise reasonable) argument that I/O is the bottleneck.

    Read the article

  • JavaScript keycode 46: is it the DEL function key or the (.) period sign?

    - by Omar
    I'm writing some logic in JavaScript using jQuery, where I must check the input content against a regex pattern, e.g. "^[a-zA-Z0-9_]*$" //Alpha-numeric and _ The logic is almost done; I just have a little problem filtering the DEL function key. My logic goes like this: var FunctionsKey = new Array(8, 9, 13, 16, 35, 36, 37, 39, 46); function keypressValidation(key) { if (config.regexExp != null) { if ($.inArray(key, FunctionsKey) != -1) { return true; } else { var keyChar = String.fromCharCode(key); return RegexCheck(keyChar); } } return true; } If the key code is one of those in the array, I let it pass; if not, I get the char and compare it against the regex. The problem is: in some browsers the DEL key and the '.' (period sign) have the same key code, 46. So is there better logic to filter the function keys, or must I write a condition for that case, maybe removing 46 from the array, trying to convert the key to a char and, if it is (.), letting it go to the regex function, otherwise letting it pass? The other question would be: are there more shared key codes in some browsers? EDIT: My suggested solution won't work because it doesn't matter which key the user pressed (DEL or period), I always get (.) as the char, at least on Opera and FF =(.
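
    One pattern that sidesteps the collision, sketched here as an assumption rather than a guaranteed cross-browser fix, is to split the work across two events: keydown reports key codes (46 is always DEL there, and the period arrives as its own code, typically 190), while keypress reports character codes (46 can only mean "."). The #myInput selector is a placeholder; FunctionsKey and RegexCheck come from the question:

        // keydown: key codes - 46 here is the DEL key, never the period
        $('#myInput').keydown(function (e) {
            if ($.inArray(e.which, FunctionsKey) !== -1) {
                return true;                       // navigation/editing keys pass untouched
            }
        });

        // keypress: character codes - 46 here is "."
        $('#myInput').keypress(function (e) {
            if (e.which === 0) {                   // Firefox still fires keypress for function keys
                return true;
            }
            return RegexCheck(String.fromCharCode(e.which));
        });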

    Read the article
