Search Results

Search found 665 results on 27 pages for 'problematic'.

Page 19 of 27

  • How to debug/break into CodeDom-compiled code

    - by Jason Coyne
    I have an application which loads up C# source files dynamically and runs them as plugins. When I am running the main application in debug mode, is it possible to debug into the dynamic assembly? Obviously setting breakpoints is problematic, since the source is not part of the original project, but should I be able to step into, or break on exceptions in, the code? Is there a way to get CodeDom to generate PDBs for this or something? Here is the code I am using for dynamic compilation:

        CSharpCodeProvider codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });
        ICodeCompiler icc = codeProvider.CreateCompiler();
        CompilerParameters parameters = new CompilerParameters();
        parameters.GenerateExecutable = false;
        parameters.GenerateInMemory = true;
        parameters.CompilerOptions = string.Format("/lib:\"{0}\"", Application.StartupPath);
        parameters.ReferencedAssemblies.Add("System.dll");
        parameters.ReferencedAssemblies.Add("System.Core.dll");
        CompilerResults results = icc.CompileAssemblyFromSource(parameters, Source);
        DLL.CreateInstance(t.FullName, false, BindingFlags.Default, null,
            new object[] { engine }, null, null);
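
    For what it's worth, a minimal sketch of one commonly suggested approach (not from the original post): compile to disk with debug information instead of in memory, so the compiler emits a PDB the debugger can pick up. codeProvider and Source are assumed from the snippet above; the temp-file paths are illustrative.

        // Sketch: emit the assembly and its PDB to disk so breakpoints and
        // step-into can work. Stepping into the source also requires the
        // source on disk, hence CompileAssemblyFromFile rather than FromSource.
        CompilerParameters debugParams = new CompilerParameters();
        debugParams.GenerateExecutable = false;
        debugParams.GenerateInMemory = false;          // must be false to get a PDB file
        debugParams.IncludeDebugInformation = true;    // emit plugin.pdb next to plugin.dll
        debugParams.OutputAssembly = Path.Combine(Path.GetTempPath(), "plugin.dll");
        debugParams.ReferencedAssemblies.Add("System.dll");

        string sourceFile = Path.Combine(Path.GetTempPath(), "plugin.cs");
        File.WriteAllText(sourceFile, Source);
        CompilerResults r = codeProvider.CompileAssemblyFromFile(debugParams, sourceFile);
        Assembly plugin = r.CompiledAssembly;          // loads plugin.dll + plugin.pdb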

  • "Downloading" a computed value form JavaScript

    - by Travis Jensen
    I'm hoping you can prove me wrong here (please, please, please! ;). I have a situation where I need to download encrypted data from a Server D (for "Data"). Server K (for "Key") has the encryption key. For security's sake, I would really prefer that Server D never know the key that Server K knows. What I want is for my client (e.g. your browser) to connect to Server D for the data and Server K for the key, and do the decryption locally, so the unencrypted stuff never leaves your computer. I can do this fine for text areas in the DOM by replacing the contents of the HTML. However, sometimes I would like to handle larger files that I stream to the file system. For instance, perhaps I want to encrypt a movie, then decrypt it and stream the contents to my video player. I am not a JavaScript guru by any stretch, especially when it comes to the edge cases of things like the security sandbox. For Small D, I can handle the decryption, but I don't know how to save the decrypted file. Large D seems problematic as memory runs out. Anybody have any ideas that don't involve native plugins? Thanks!

  • Pluto or Jetspeed on Google App Engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think that the restrictions of the App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine? Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    1. A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml etc. This makes deployment on the App Engine rather difficult, because the apps are not visible to each other, so all portlet-containing archives need to be included in the war file of the deployed "App Engine based portal server".

    2. The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xml and web.xml files, which are located in the same spot in every portlet archive that is loaded. I think this may be problematic in the App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xml from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone here who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto or any other portlet container to be able to run on the App Engine?

  • Fetching Core Data for Tableview on iPhone - Tutorial leaves me with crashes when adding items to a

    - by Gordon Fontenot
    Been following the Core Data tutorial on Apple's developer site, and all is good until I have to add something to the fetched store. I am getting this error after a successful build and load, when I try to add a new item to the list:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Invalid update: invalid number of rows in section 0. The number of
        rows contained in an existing section after the update (0) must be equal to
        the number of rows contained in that section before the update (0), plus or
        minus the number of rows inserted or deleted from that section (1 inserted,
        0 deleted).'

    Due to the fact that the fetch goes through fine, and that if I replace the fetching with eventList = [[NSMutableArray alloc] init] it works as expected (without persistence, of course), I am led to believe that the problem comes from not creating the mutable array correctly. Here's the problematic part of the code:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event"
                                                  inManagedObjectContext:managedObjectContext];
        [request setEntity:entity];

        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"creationDate"
                                                                       ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [sortDescriptors release];
        [sortDescriptor release];

        NSError *error;
        NSMutableArray *mutableFetchResults =
            [[managedObjectContext executeFetchRequest:request error:&error] mutableCopy];
        if (mutableFetchResults = nil) {
            // Handle the error
        }
        [self setEventList:mutableFetchResults];
        [mutableFetchResults release];
        [request release];

    I have tried switching the NSArrays in the second chunk out with NSMutableArrays, but I still get the same error. For reference, the section of code that is throwing the error when I try adding an entry is here:

        [eventList insertObject:event atIndex:0];
        NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0];
        [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                              withRowAnimation:UITableViewRowAnimationFade];
        [self.tableView scrollToRowAtIndexPath:[NSIndexPath indexPathForRow:0 inSection:0]
                              atScrollPosition:UITableViewScrollPositionTop
                                      animated:YES];

    It errors out at the insertRowsAtIndexPaths call. Thanks in advance for any help.

  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems, within the CakePHP framework, is defining simultaneous hasOne and hasMany relationships between two models. For example:

        BlogEntry hasMany Comment
        BlogEntry hasOne MostRecentComment

    (where MostRecentComment is the Comment with the most recent created field). Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways:

    1. Using a model callback (or, sometimes, even in the controller or view!), I've simulated a MostRecentComment with:

        $this->data['MostRecentComment'] = $this->data['Comment'][0];

    This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't let Cake's in-built pagination features sort by MostRecentComment fields (e.g. sorting BlogEntry results reverse-chronologically by MostRecentComment.created).

    2. Maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain, and breaks Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong.

    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria: it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results); it shouldn't require additional fields in the comments or blog_entries tables; and it should respect the 'spirit' of the CakePHP ORM. (I'm also not sure the title of this question is as concise/informative as it could be.)

  • Closures and universal quantification

    - by Apocalisp
    I've been trying to work out how to implement Church-encoded data types in Scala. It seems that it requires rank-n types, since you would need a first-class const function of type forall a. a -> (forall b. b -> b). However, I was able to encode pairs thusly:

        import scalaz._

        trait Compose[F[_],G[_]] { type Apply[A] = F[G[A]] }

        trait Closure[F[_],G[_]] { def apply[B](f: F[B]): G[B] }

        def pair[A,B](a: A, b: B) =
          new Closure[Compose[PartialApply1Of2[Function1,A]#Apply,
                              PartialApply1Of2[Function1,B]#Apply]#Apply, Identity] {
            def apply[C](f: A => B => C) = f(a)(b)
          }

    For lists, I was able to encode cons:

        def cons[A](x: A) = {
          type T[B] = B => (A => B => B) => B
          new Closure[T,T] {
            def apply[B](xs: T[B]) = (b: B) => (f: A => B => B) => f(x)(xs(b)(f))
          }
        }

    However, the empty list is more problematic, and I've not been able to get the Scala compiler to unify the types. Can you define nil so that, given the definitions above, the following compiles?

        cons(1)(cons(2)(cons(3)(nil)))

  • What if you used the wrong language?

    - by HS
    A reply to another question made me remember a project from some years ago when it turned out that Java was not the right tool to use.

        I typically only learn a new language when I have a problem that it solves
        better than the ones I already know. [...] Then I write whatever program I
        wanted to learn that language for in the first place. [...] By the time I've
        gotten my target program written, I've usually got a decent handle on the
        language, not to mention any other features it has, and I've got other
        ideas to use it for.

    I did just that back then with Java, because the client thought it to be the right language to use (platform independent) and initial evaluation confirmed that. However, much later in the project there were some issues (I can't really remember all the details by now). So the project that started as a nice learning experience turned into a nightmare toward the end. I was on the brink of switching over to my trusted C++ and doing a complete rewrite. The client was not so much of a problem to convince back then, but my supervisor was strongly opposed because of all the work that had already gone into the Java version. In hindsight, he was right, and the project was completed more or less with the intended features kind of working, but it is the project that I am least proud of by now.

    Long story short: what do you think? When is it too much, so that a switch to another technology is worthwhile? I personally would estimate the point of no return to be around 50% of the planned effort, but I really want to know if anyone has real experience with such a switch. And to answer the inevitable question: I do not really care whether the technology switched to is proven or another new thing. The latter would basically need more initial scrutiny, based on the past experiences in the problematic project.

  • Should I use two queries, or is there a way to JOIN this in MySQL/PHP?

    - by Jack W-H
    Morning y'all! Basically, I'm using a table called 'Code' to store my main data, a table called 'Tags' to store the tags for each code entry, and a table called 'code_tags' to intersect them. There's also a table called 'users' which stores information about the users who submitted each bit of code. On my homepage, I want 5 results returned from the database. Each returned result needs to list the code's title and summary, and then fetch the author's first name based on the ID of the person who submitted it. I've managed to achieve this much so far (woot!). My problem lies when I try to collect all the tags as well. At the moment this is a pretty big query and it's scaring me a little. Here's my problematic query:

        SELECT code.*, code_tags.*, tags.*,
               users.firstname AS authorname, users.id AS authorid
        FROM code, code_tags, tags, users
        WHERE users.id = code.author
          AND code_tags.code_id = code.id
          AND tags.id = code_tags.tag_id
        ORDER BY date DESC
        LIMIT 0, 5

    What it returns is correct-looking data, but with several repeated rows for each tag. So, for example, if a Code entry has 3 tags, it will return an identical row 3 times, except that in each of the three returned rows the tag changes. Does that make sense? How would I go about changing this? Thanks! Jack

  • Saving abstract and subclasses to a database

    - by bretddog
    Hi, I have an abstract class StrategyBase, and a set of subclasses StrategyA/B/C etc. The subclasses use some of the properties of the base class, and have some individual properties. My question is how to save this to a database. I'm currently using SQL CE and LINQ to SQL, creating entity classes automatically with SqlMetal.exe. I've seen there are three solutions shown in this question, but I'm not able to see how these solutions will or won't work with SqlMetal/entity classes. It seems to me the "concrete table inheritance" would probably work without any manual modification. What about the other two? Would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution, I can't really see at all how that will map into the entity classes for a useful purpose. I may note that I extend these partial entity classes to make the classes of my business objects. I may also consider moving to Entity Framework instead of SqlMetal/LINQ to SQL, so it would be nice to know if that makes any difference to which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means I have to modify the program code, and probably the database schema, when adding a new strategy. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages here that you may be able to advise on. Cheers!
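
    For reference, single table inheritance is expressible directly in LINQ to SQL's attribute mapping. A minimal hand-written sketch, assuming a Strategies table with a discriminator column; the class, table and column names here are illustrative, not from the post. The base class is shown non-abstract so it can serve as the default mapping; with an abstract base, IsDefault must point at a concrete subclass.

        using System.Data.Linq.Mapping;

        [Table(Name = "Strategies")]
        [InheritanceMapping(Code = "BASE", Type = typeof(StrategyBase), IsDefault = true)]
        [InheritanceMapping(Code = "A", Type = typeof(StrategyA))]
        [InheritanceMapping(Code = "B", Type = typeof(StrategyB))]
        public class StrategyBase
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id { get; set; }

            [Column(IsDiscriminator = true)]
            public string Kind { get; set; }      // tells LINQ to SQL which class to materialize

            [Column]
            public string Name { get; set; }      // shared property
        }

        public class StrategyA : StrategyBase
        {
            [Column(CanBeNull = true)]
            public int? WindowSize { get; set; }  // subclass-only column
        }

        public class StrategyB : StrategyBase
        {
            [Column(CanBeNull = true)]
            public decimal? Threshold { get; set; }
        }

    This also answers the "wouldn't all classes get all variables" worry in schema terms: every subclass column exists in the one table, but a row only fills the columns of its own type and leaves the rest NULL, which is why the subclass-only columns must be nullable.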

  • Appending facts to an existing Prolog file

    - by vuj
    Hi, I'm having trouble inserting facts into an existing Prolog file without overwriting the original contents. Suppose I have a file test.pl:

        :- dynamic born/2.

        born(john,london).
        born(tim,manchester).

    If I load this in Prolog and assert more facts:

        | ?- assert(born(laura,kent)).
        yes

    I'm aware I can save this by doing:

        | ?- tell('test.pl'), listing(born/2), told.

    This works, but test.pl now contains only the facts, not the ":- dynamic born/2." directive:

        born(john,london).
        born(tim,manchester).
        born(laura,kent).

    This is problematic because if I reload this file, I won't be able to insert any more facts into test.pl, since ":- dynamic born/2." doesn't exist any more. I read somewhere that I could do:

        append('test.pl'), listing(born/2), told.

    which should just append to the end of the file. However, I get the following error:

        ! Existence error in user:append/1
        ! procedure user:append/1 does not exist
        ! goal:  user:append('test.pl')

    Btw, I'm using SICStus Prolog. Does this make a difference? Thanks!

  • Altering an embedded TrueType font so it will be usable by Windows GDI

    - by Ritsaert Hornstra
    I am trying to render PDF content to a GDI device context (a 24-bit bitmap, to be exact). Parsing the PDF stream into PDF objects and rendering the PDF commands from the content dictionary works well, including font rendering. Embedded fonts are decompressed from their FontFile streams and "loaded" using AddFontMemResourceEx. Now some embedded fonts drop TrueType tables that are needed by GDI, like the NAME table. Because of this, I tried to modify the font by parsing the TrueType subset font into its tables; tables that have data missing are modified, and missing tables are regenerated with as correct information as possible. I use the Microsoft Font Validator tool to see how "correct" the generated font is. I still get a few errors, like the max values of the maxp table being too large (it is a subset), or "The xAvgCharWidth field does not equal the calculated value" for the OS/2 table, but errors like these do not stop other embedded fonts from being usable. The fonts embedded using PDFCreator are the ones that are problematic. Questions:

    - How can I determine what I need to change in the font file in order for GDI to be able to use it?
    - Are there any other font validation tools that might give me insight into what is still wrong with the font file?

    If needed, I can make an original font file and an altered font file available for download somewhere.
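
    For context, the in-memory load mentioned above looks roughly like this from C#; a sketch, where the font file path and error handling are assumptions, not from the post.

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        class MemFontLoader
        {
            // GDI entry point for installing a font that exists only in memory.
            [DllImport("gdi32.dll", ExactSpelling = true)]
            static extern IntPtr AddFontMemResourceEx(
                IntPtr pbFont, uint cbFont, IntPtr pdv, ref uint pcFonts);

            static void Main()
            {
                // e.g. the decompressed FontFile stream extracted from the PDF
                byte[] fontData = File.ReadAllBytes("embedded-font.ttf");

                IntPtr buf = Marshal.AllocCoTaskMem(fontData.Length);
                Marshal.Copy(fontData, 0, buf, fontData.Length);

                uint fontCount = 0;
                IntPtr handle = AddFontMemResourceEx(
                    buf, (uint)fontData.Length, IntPtr.Zero, ref fontCount);
                Marshal.FreeCoTaskMem(buf);  // GDI keeps its own copy of the data

                Console.WriteLine(handle == IntPtr.Zero
                    ? "GDI rejected the font"
                    : "loaded " + fontCount + " font(s)");
            }
        }

    A null return here is the quickest signal that GDI refused the altered file, which makes a loader like this handy as a test harness while patching tables.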

  • mysql_real_escape_string() just makes an empty string?

    - by James P
    I am using a jQuery AJAX request to a page called like.php that connects to my database and inserts a row. This is the like.php code:

        <?php
        // Some config stuff
        define(DB_HOST, 'localhost');
        define(DB_USER, 'root');
        define(DB_PASS, '');
        define(DB_NAME, 'quicklike');

        $link = mysql_connect(DB_HOST, DB_USER, DB_PASS) or die('ERROR: ' . mysql_error());
        $sel = mysql_select_db(DB_NAME, $link) or die('ERROR: ' . mysql_error());

        $likeMsg = mysql_real_escape_string(trim($_POST['likeMsg']));
        $timeStamp = time();

        if(empty($likeMsg))
            die('ERROR: Message is empty');

        $sql = "INSERT INTO `likes` (like_message, timestamp) VALUES ('$likeMsg', $timeStamp)";
        $result = mysql_query($sql, $link) or die('ERROR: ' . mysql_error());

        echo mysql_insert_id();

        mysql_close($link);
        ?>

    The problematic line is $likeMsg = mysql_real_escape_string(trim($_POST['likeMsg']));. It seems to just return an empty string, and in my database under the like_message column all I see is blank entries. If I remove mysql_real_escape_string() though, it works fine. Here's my jQuery code if it helps:

        $('#like').bind('keydown', function(e) {
          if(e.keyCode == 13) {
            var likeMessage = $('#changer p').html();
            if(likeMessage) {
              $.ajax({
                cache: false,
                url: 'like.php',
                type: 'POST',
                data: { likeMsg: likeMessage },
                success: function(data) {
                  $('#like').unbind();
                  writeLikeButton(data);
                }
              });
            } else {
              $('#button_container').html('');
            }
          }
        });

    All this jQuery code works fine; I've tested it myself independently. Any help is greatly appreciated, thanks.

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods for generating the feeds, and I would like to ask which is better in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known: what if a lot of people's caches expire at the same time? The solution also does not scale well; the brief is for feeds to update as close to real time as possible.

    The hypothesised solution in my eyes seems much better: all processing is done offline, so no user waits for a page to generate, and there are no joins, so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, that results in inserting 2,000,000 rows into the database.

    Question: It boils down to two points:

    - Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event?
    - Is there anything else I have missed?
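
    To make the hypothesised method concrete, here is a toy in-memory sketch of fan-out on write; every name in it is hypothetical, and a real implementation would batch the INSERTs rather than write row by row.

        using System;
        using System.Collections.Generic;

        // Fan-out on write: each event is copied to every friend's feed at
        // write time, so reading a feed is one indexed lookup, not a join.
        class FeedDemo
        {
            static Dictionary<int, List<int>> friends = new Dictionary<int, List<int>>
            {
                { 1, new List<int> { 2, 3 } },  // user 1 is friends with users 2 and 3
                { 2, new List<int> { 1 } },
            };

            // Stand-in for the user_feed table: user id -> feed rows.
            static Dictionary<int, List<string>> userFeed = new Dictionary<int, List<string>>();

            static void PublishEvent(int author, string activity)
            {
                // One insert per friend: with 100,000 friends this is the
                // 100,000-row write the question worries about.
                foreach (int friend in friends[author])
                {
                    if (!userFeed.ContainsKey(friend))
                        userFeed[friend] = new List<string>();
                    userFeed[friend].Add("user " + author + ": " + activity);
                }
            }

            static void Main()
            {
                PublishEvent(1, "posted a photo");
                PublishEvent(2, "joined a group");
                Console.WriteLine(string.Join("\n", userFeed[2]));  // user 2's feed: a plain lookup
            }
        }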

  • Accepting more simultaneous keyboard inputs

    - by unknownthreat
    Sometimes a normal computer keyboard will only accept a certain number of simultaneous key presses. I have a Logitech keyboard that can accept up to 3-4 key presses at the same time; the computer does not register any more input if you press more than 4 keys on this keyboard. It also depends on which area of the keyboard you use: some locations allow more keys to be pressed (like the arrow keys), while some locations permit only 1-2 keys. This also differs from keyboard to keyboard; some older keyboards only accept 1-2 keys. This isn't problematic for usual office work, but it is when it comes to gaming. For instance, imagine a platform game where you have to jump, attack, and control direction at the same time. This implies several simultaneous key presses, and some keyboards cannot accept that much input. However, I've tried this in several games, and the number of possible keyboard inputs also seems to differ between them. Therefore, we have two issues:

    1. Keyboards differ in how many simultaneous inputs they accept.
    2. Some games can accept more keyboard inputs than other games.

    At first I thought this was a hardware-only problem, but then why do programs behave differently? Why can some programs accept more keyboard inputs than others? So how can we write our programs to accept more keyboard inputs?
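
    One partial answer on Windows (my sketch, not from the post): poll each virtual key's state directly instead of relying on buffered key-down messages, which is how many games read input. Note that this still cannot see key combinations the keyboard hardware itself masks out (ghosting).

        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;
        using System.Threading;

        class KeyPoller
        {
            // Each key's up/down state is queried independently, so the app is
            // not limited by how many key-down *events* the message queue delivers.
            [DllImport("user32.dll")]
            static extern short GetAsyncKeyState(int vKey);

            static void Main()
            {
                while (true)
                {
                    var down = new List<int>();
                    for (int vk = 0x08; vk <= 0xFE; vk++)          // full virtual-key range
                        if ((GetAsyncKeyState(vk) & 0x8000) != 0)  // high bit set = key is down
                            down.Add(vk);

                    if (down.Count > 0)
                        Console.WriteLine("keys down: " + string.Join(", ", down));
                    Thread.Sleep(16);                              // poll at roughly 60 Hz
                }
            }
        }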

  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files customised per site. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files. Each site sits within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location directly from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and having each site use these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that this approach is generally possible, similar to how Zend Framework (and probably others) redirect all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it, and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.

  • Creating WPF components in a background thread

    - by mizipzor
    I'm working on a reporting system. A series of DocumentPage instances are to be created through a DocumentPaginator. These documents include a number of WPF components that are to be instantiated so the paginator includes the correct things when later sent to the XpsDocumentWriter (which in turn is sent to the actual printer). My problem is that the DocumentPage instances take quite a while to create (long enough for Windows to mark the application as frozen), so I tried to create them in a background thread, which is problematic since WPF expects the attributes on them to be set from the GUI thread. I would also like to have a progress bar showing up, indicating how many pages have been created so far. Thus, it looks like I'm trying to get two things to happen in parallel on the GUI. The problem is hard to explain, and I'm really not sure how to tackle it. In short:

    - Create a series of DocumentPages. These include WPF components.
    - These are to be created on a background thread, or via some other trick, so the application isn't frozen.
    - After each page is created, a WPF ProgressBar should be updated.

    If there is no decent way to do this, alternate solutions and approaches are more than welcome.
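
    One workaround sketch (an assumption on my part, not the poster's code): keep page creation on the UI thread, but build one page per dispatcher callback at background priority, so input, rendering and the ProgressBar stay responsive between pages. BuildDocumentPage, pages and progressBar are hypothetical members of the window.

        // Inside a Window or UserControl class (hypothetical members).
        void CreatePages(int index, int total)
        {
            if (index >= total) return;                     // all pages done

            DocumentPage page = BuildDocumentPage(index);   // hypothetical: the slow per-page work
            pages.Add(page);
            progressBar.Value = (index + 1) * 100.0 / total;

            // Queue the next page behind pending input and render work, so the
            // UI thread is never blocked long enough to look frozen.
            Dispatcher.BeginInvoke(
                new Action(() => CreatePages(index + 1, total)),
                System.Windows.Threading.DispatcherPriority.Background);
        }

    The trade-off is that total page-creation time grows slightly, since the UI gets a slice between pages, but the window keeps painting and the progress bar advances after every page.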

  • Matlab - Trace contour line between two different points

    - by Graham
    Hi, I have a set of points represented as a 2-row by n-column matrix. These points make up a connected boundary, or edge. I require a function that traces this contour from a start point P1 and stops at an end point P2. It also needs to be able to trace the contour in a clockwise or anti-clockwise direction. I was wondering if this can be achieved by using some of MATLAB's functions. I have tried to write my own function, but it was riddled with bugs. I have also tried using bwtraceboundary and indexing, but this gives problematic results, as the points within the matrix are not in the order that creates the contour. Thank you in advance for any help. Btw, I have included a link to a plot of the set of points. It is half the outline of a hand. The function would ideally trace the contour from the red star to the green triangle, in either direction, returning the points in order of traversal.

  • Is it safe to read regular expressions from a file?

    - by Zilk
    Assuming a Perl script that allows users to specify several text filter expressions in a config file, is there a safe way to let them enter regular expressions as well, without the possibility of unintended side effects or code execution? Without actually parsing the regexes and checking them for problematic constructs, that is. There won't be any substitution, only matching. As an aside, is there a way to test whether a specified regex is valid before actually using it? I'd like to issue warnings if something like /foo (bar/ was entered. Thanks, Z.

    EDIT: Thanks for the very interesting answers. I've since found out that the following dangerous constructs will only be evaluated in regexes if the use re 'eval' pragma is used:

        (?{code})
        (??{code})
        ${code}
        @{code}

    The default is no re 'eval', so unless I'm missing something, it should be safe to read regular expressions from a file, with the only check being the eval/catch posted by Axeman. At least I haven't been able to hide anything evil in them in my tests. Thanks again. Z.

  • Preventing HTML character entities in locale files from getting munged by Rails3 xss protection

    - by Chris S
    We're building an app, our first using Rails 3, and we're having to build i18n in from the outset. Being perfectionists, we want real typography to be used in our views: dashes, curled quotes, ellipses et al. This means in our locales/xx.yml files we have two choices:

    1. Use real UTF-8 characters inline. This should work, but it's hard to type, and it scares me due to the amount of software which still does naughty things to Unicode.
    2. Use HTML character entities (&#8217; &#8212; etc.). These are easier to type, and probably more compatible with misbehaving software.

    I'd rather take the second option; however, the auto-escaping in Rails 3 makes this problematic, as the ampersands in the YAML get auto-converted into character entities themselves, resulting in visible &#8217;s in the browser. Obviously this can be worked around by using raw on strings, i.e.:

        raw t('views.signup.organisation_details')

    But we're not happy going down the route of globally raw-ing every time we t something, as it leaves us open to making an error and producing an XSS hole. We could selectively raw strings which we know contain character entities, but this would be hard to scale, and just feels wrong; besides, a string which contains an entity in one language may not in another. Any suggestions on a clever Rails-y way to fix this? Or are we doomed to crap typography, XSS holes, hours of wasted effort, or all three?

  • How do I generate a connection reset programmatically?

    - by Brock Adams
    Hi, I'm sure you've seen the "the connection was reset" message displayed when trying to browse web pages. (The text is from Firefox; other browsers differ.) I need to generate that message/error/condition on demand, to test workarounds. So, how do I generate that condition programmatically? (How do I generate a TCP RST from PHP, or from one of the other web-app languages?) Caveats and conditions:

    - It cannot be a general IP block. The test client must still be able to see the test server when not triggering the condition.
    - Ideally, it would be done at the web-application level (Python, PHP, ColdFusion, JavaScript, etc.). Access to routers is problematic, and access to the Apache config is a pain.
    - Ideally, it would be triggered by fetching a specific web page.
    - Bonus if it works on a standard, commercial web host.

    Update: Sending RST is not enough to cause this condition. See my partial answer below. I have a solution that works on a local machine; now I need to get it working on a remote host.
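
    For the socket-level piece, a minimal sketch of the usual trick (my assumption, not the poster's solution): closing a connected socket with SO_LINGER set to a zero timeout makes the TCP stack abort the connection with a RST instead of the normal FIN close. The port number is illustrative.

        using System.Net;
        using System.Net.Sockets;

        class RstServer
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 8080);
                listener.Start();

                Socket client = listener.AcceptSocket();        // wait for the test client
                client.LingerState = new LingerOption(true, 0); // SO_LINGER on, timeout 0
                client.Close();                                 // aborts: sends TCP RST
                listener.Stop();
            }
        }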

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for (i = 0; i < 1000000; i++) {
            current = myVec[i];
            doSomethingWith(current);
            doAlotMoreWith(current);
            messAroundWith(current);
            checkSomeValuesOf(current);
        }

    or

        vector<myType> myVec;
        int i;

        for (i = 0; i < 1000000; i++) {
            doSomethingWith(myVec[i]);
            doAlotMoreWith(myVec[i]);
            messAroundWith(myVec[i]);
            checkSomeValuesOf(myVec[i]);
        }

    I'm currently using the first solution. There are really millions of calls per second, and every single copy or move is a performance concern.

  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming from my WordPress site. They're making everything slow, and I think they are eating up CPU. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific to my site (like the theme). I have a vague feeling that the database is not always using the indexes, or is not using them properly. Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer; it is hugely appreciated.

    Query 1 [wp-blog-header.php(14): wp()]:

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

        Query time: 34.2829 (ms)

    Query 2 [wp-content/themes/LMHR/index.php(40): query_posts()]:

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
            SELECT tr.object_id
            FROM wp_term_relationships AS tr
            INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
            WHERE tt.taxonomy = 'category'
              AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                        rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                      427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                       1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id   1     Using where

        Query time: 70.3900 (ms)

  • Travelling Visual Studio developers

    - by Graphain
    Hi, I am about to travel to Europe (I'm Australian, but imagine this is a similar circumstance for US users, and simply flipped for European users). However, there is a slim possibility I will need to do some Visual Studio work while I'm travelling. As I see it, I have three options:

    1. Leave a desktop PC on at home and access it remotely via net cafes.
    2. Carry a laptop with me on the trip and upload files as required using public wifi.
    3. As option 2, but buy a cheap, light netbook that is miraculously capable of running VS.

    Does anyone have any experience with, or advice to shed light on, any of these options? For reference, this existing post suggests that using VS remotely over short distances is okay, but that over longer distances it could be more problematic. I've used VS via RDP to a US server before, and it was pretty laggy, but for small changes I could get by. Concerns I have that you may have some experience with:

    - Weight of luggage (ideally I'd like to travel light).
    - Security of the laptop (I imagine it'll be too heavy to carry around all the time, so I'd have to leave it at the hotel/hostel etc. and hope for the best).
    - Security of data (I don't want someone stealing RDP access to my home PC).
    - Security of FTP (I don't want someone stealing FTP passwords over wireless).

  • Adventures on Enterprise Library 5.0: Who moved my cheese (namespace)

    - by Junior Mayhé
    Jesus, Krishna, Buddha! I've migrated to EntLib 5.0, but classes like ISymmetricCryptoProvider are not recognized any more. Funny to say, the Data, Logging and other blocks are compiling fine. Here's the problematic class:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;   // --> not working anymore
        using Microsoft.Practices.EnterpriseLibrary.Security.Cryptography;  // --> not working anymore

        namespace MyClassLibrary.Security.EnterpriseLibrary
        {
            public sealed class Crypto
            {
                public static ISymmetricCryptoProvider MyProvider
                {
                    get
                    {
                        // IConfigurationSource is not recognized either, nor is SystemConfigurationSource
                        IConfigurationSource cs = new SystemConfigurationSource();
                        SymmetricCryptoProviderFactory scpf = new SymmetricCryptoProviderFactory(cs);
                        ISymmetricCryptoProvider p = scpf.CreateDefault();
                        return p;
                    }
                }
            }
        }

    The references are fine on the project too. I really don't know why this particular project is causing so much trouble in VS2010! Older references were deleted, and the project was cleaned and rebuilt, but I can't make it compile :-(

    The references are:

        Microsoft.Practices.EnterpriseLibrary.Common
        Microsoft.Practices.EnterpriseLibrary.Logging
        Microsoft.Practices.EnterpriseLibrary.Logging.Database
        Microsoft.Practices.EnterpriseLibrary.Security
        Microsoft.Practices.EnterpriseLibrary.Security.Cryptography

    Why can some namespaces be found while others can't?

  • Guid primary/foreign key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guids. I'll put my problem straight up. It's a typical retail management app, with POS and back-office functionality, and about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or selects executing on them. However, some do have frequent inserts and selects, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from int to Guid, would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and that 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, whether Guids will be problematic in tables with many foreign keys? I'd much appreciate your thoughts on it.
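
    As an aside, the fragmentation point can also be addressed client-side. A sketch of the well-known "COMB" technique (an illustration, not from the post): stamp the bytes SQL Server orders first with the current time, so new keys sort roughly sequentially and avoid most random page splits.

        using System;

        static class CombGuid
        {
            // Returns a GUID whose last six bytes (the most significant ones in
            // SQL Server's uniqueidentifier ordering) hold the current UTC time,
            // so consecutive keys land near the end of the clustered index.
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();
                byte[] timeBytes = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
                if (BitConverter.IsLittleEndian)
                    Array.Reverse(timeBytes);                // big-endian: high bytes first
                // Lower 48 bits of the tick count; the ordering wraps after
                // roughly 10 months, which is fine for an illustration.
                Array.Copy(timeBytes, 2, guidBytes, 10, 6);  // overwrite bytes 10..15
                return new Guid(guidBytes);
            }
        }

    Such keys keep the 16-byte width, so whether they help enough still depends on the insert rate and on how many secondary indexes carry the key.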
