Search Results

Search found 6805 results on 273 pages for 'fast formula'.

Page 238 of 273

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have code of the following form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam is a method called with many different arguments. jniCallWhichMayCrash is a native method intended to do some complex processing of its parameter and to create a complex object; it can crash in some cases. wrapperForComplexNativeObject is a wrapper type generated by SWIG. processResult is a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes, more like hierarchies): 1) some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created by invocations of processParam() with different parameter values, and since keeping all the duplicates is costly, they have to be cached; 2) some unique objects which reference each other (from the same hierarchy) and some objects of the first kind.

    After processParam has been executed for each argument in some set, the data created in processResult will be processed together. The problem is that jniCallWhichMayCrash may crash the entire JVM, which would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore such crashes and just skip some chunks of data when they occur. To do this, we should run processParam inside a separate process and somehow pass the result (HOW? HOW?! This is the question) to the main process; then, in case of a crash, we only lose some part of the data (which is OK) without losing everything else. So for now the main problem is implementing the transport between the processes. Which options do I have? I can think of serializing and transmitting binary data over streams, but serialization may not be very fast due to the objects' complexity. Are there other ways of implementing this?
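
    One transport option, as a minimal sketch: run the risky call in a child JVM and read a serialized result from the child's stdout. The child entry point (called RiskyWorker here, a hypothetical name) would invoke jniCallWhichMayCrash and write its result to System.out; if the native code brings the child down, the parent loses only that one parameter's data. JDK only:

        import java.io.ObjectInputStream;

        class CrashIsolatingClient {
            static Object processInChild(String param) {
                try {
                    // Launch a separate JVM so a native crash only kills the child.
                    Process child = new ProcessBuilder(
                            "java", "-cp", System.getProperty("java.class.path"),
                            "RiskyWorker", param)   // hypothetical child main class
                        .start();
                    try (ObjectInputStream in =
                             new ObjectInputStream(child.getInputStream())) {
                        return in.readObject();     // child serialized the result to stdout
                    } finally {
                        child.waitFor();
                    }
                } catch (Exception crashed) {
                    return null;                    // child died mid-write: skip this chunk
                }
            }
        }

    Standard Java serialization can indeed be slow for deep object graphs, but the process boundary and the encoding are independent choices: the same pattern works with hand-written fields over a DataOutputStream, or a local socket instead of stdout.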

    Read the article

  • Why is DivMod Limited to Words (<=65535)?

    - by Andreas Rejbrand
    In Delphi, the declaration of the DivMod function is

        procedure DivMod(Dividend: Cardinal; Divisor: Word;
          var Result, Remainder: Word);

    Thus the divisor, result, and remainder cannot be greater than 65535, a rather severe limitation. Why is this? Why couldn't the declaration be

        procedure DivMod(Dividend: Cardinal; Divisor: Cardinal;
          var Result, Remainder: Cardinal);

    The procedure is implemented in assembly, and is therefore probably extremely fast. Would it not be possible for the code

        PUSH EBX
        MOV EBX,EDX
        MOV EDX,EAX
        SHR EDX,16
        DIV BX
        MOV EBX,Remainder
        MOV [ECX],AX
        MOV [EBX],DX
        POP EBX

    to be adapted to cardinals? And how much slower is the naïve attempt

        procedure DivModInt(const Dividend: integer; const Divisor: integer;
          out result: integer; out remainder: integer);
        begin
          result := Dividend div Divisor;
          remainder := Dividend mod Divisor;
        end;

    which is not (?) limited to 16-bit values?

    Read the article

  • Fastest way to develop data entry screens for a .NET backend?

    - by jay23
    I am a .NET / C# back-end guy. I am working on an app that will have about 200 different data entry screens. For me, exposing DTOs as a collection for CRUD (IUpdatable and IQueryable) is the easy part; I can do it in my sleep :-). What I am trying to decide is what type of front-end technology will let me develop these data entry screens fast. They don't have to be fancy, but they are not just plain grids either; on average they have about 15 form fields and some client-side data validation (no DB lookups). The options I am looking at are:

    - ExtJS on the front and REST / JSON on the back
    - ASP.NET RIA, but I do not know Silverlight (well, XAML)
    - plain ASP.NET / MVC

    One idea I had was for the DTO to contain the metadata about the form (as attributes), so that the form can be generated dynamically (sketched below), but I do not want to reinvent the wheel if there is an easier way. I have looked at RAD software, but all of it inspects the DB and generates screens from that; I want something that can look at my DTOs and generate screens instead. Jay
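
    The attribute-driven idea is cheap to prototype. Here is a sketch in Java (annotations plus reflection) purely for illustration; .NET attributes and System.Reflection follow the same shape, and every name below (FormField, CustomerDto, renderForm) is invented for the example:

        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.reflect.Field;

        @Retention(RetentionPolicy.RUNTIME)
        @interface FormField {
            String label();
            boolean required() default false;
            int maxLength() default 255;
        }

        class CustomerDto {                       // hypothetical DTO
            @FormField(label = "Name", required = true, maxLength = 80)
            String name;
            @FormField(label = "Email", required = true)
            String email;
        }

        class FormSketch {
            // Reflect over the DTO and emit one input per annotated field.
            static String renderForm(Class<?> dto) {
                StringBuilder html = new StringBuilder("<form>\n");
                for (Field f : dto.getDeclaredFields()) {
                    FormField meta = f.getAnnotation(FormField.class);
                    if (meta == null) continue;   // not a form field
                    html.append(String.format(
                        "  <label>%s</label><input name=\"%s\" maxlength=\"%d\"%s/>\n",
                        meta.label(), f.getName(), meta.maxLength(),
                        meta.required() ? " required" : ""));
                }
                return html.append("</form>").toString();
            }
        }

    The payoff with 200 screens is that each new screen is just a DTO plus its attributes; the renderer is written once.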

    Read the article

  • optimized search using ajax and keypress

    - by ooo
    I have the following code, since I want to search a database as the user types into a textbox. The code below works fine, but it seems a little inefficient: if a user types really fast, I am potentially doing many more searches than necessary. If a user types in "sailing", I end up searching on "sail", "saili", "sailin", and "sailing". I wanted to see if there is a way to detect the time between keypresses, so that I only search if the user stops typing for 500 milliseconds or so. Is there a best practice for something like this?

        $('#searchString').keypress(function(e) {
            if (e.keyCode == 13) {
                var url = '/Tracker/Search/' + $("#searchString").val();
                $.get(url, function(data) {
                    $('div#results').html(data);
                    $('#results').show();
                });
            } else {
                var existingString = $("#searchString").val();
                if (existingString.length > 2) {
                    var url = '/Tracker/Search/' + existingString;
                    $.get(url, function(data) {
                        $('div#results').html(data);
                        $('#results').show();
                    });
                }
            }
        });

    Read the article

  • How would I compare two Lists(Of <CustomClass>) in VB?

    - by Kumba
    I'm working on implementing the equality operator = for a custom class of mine. The class has one property, Value, which is itself a List(Of OtherClass), where OtherClass is yet another custom class in my project. I've already implemented the IComparer, IComparable, IEqualityComparer, and IEquatable interfaces, the =, <>, bool, and not operators, and overridden Equals and GetHashCode for OtherClass. This should give me all the tools I need to compare these objects, and various tests comparing two singular instances of them check out so far. However, I'm not sure how to approach this when they are in a List. I don't care about list order. Given:

        Dim x As New List(Of OtherClass) From {New OtherClass("foo"), New OtherClass("bar"), New OtherClass("baz")}
        Dim y As New List(Of OtherClass) From {New OtherClass("baz"), New OtherClass("foo"), New OtherClass("bar")}

    Then (x = y).ToString should print out True. I need to compare the same (not distinct) set of objects in these lists. The list shouldn't support dupes of OtherClass, but I'll have to figure out how to add that in later as an exception. I'm not interested in using LINQ: it looks nice, but in the few examples I've played with it adds a performance overhead that bugs me. Loops are ugly, but they are fast :) A straight code answer is fine, but I'd also like to understand the logic needed for such a comparison, since I'm probably going to have to implement it more than a few times down the road.
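
    The usual logic for order-insensitive equality is to count elements rather than nest loops: tally each element of the first list in a hash map, then consume the tallies with the second list. It relies on the Equals/GetHashCode overrides already in place and runs in O(n) instead of the O(n^2) of a nested loop. A sketch of that logic in Java (it ports to VB line for line):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class ListEquality {
            // True when a and b hold the same elements with the same
            // multiplicities, in any order.
            static <T> boolean sameElements(List<T> a, List<T> b) {
                if (a.size() != b.size()) return false;
                Map<T, Integer> counts = new HashMap<>();
                for (T item : a) counts.merge(item, 1, Integer::sum);
                for (T item : b) {
                    Integer left = counts.get(item);
                    if (left == null) return false;      // item appears only in b
                    if (left == 1) counts.remove(item);
                    else counts.put(item, left - 1);
                }
                return true; // equal sizes and every tally consumed
            }
        }

    Since these lists are meant to reject duplicates anyway, the tallies collapse to a set-containment check in practice, but the counted version stays correct if duplicates ever sneak in.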

    Read the article

  • Listening on UDP or switching to TCP in an MFC application

    - by Alexander.S
    I'm editing a legacy MFC application, and I have to add some basic network functionality. The operating side has to receive a simple instruction (numbers 1, 2, 3, 4, ...) and do something based on it. The client wants the latency to be as low as possible, so naturally I decided to use datagrams (UDP). But reading all sorts of resources left me puzzled: I cannot listen on UDP sockets (CAsyncSocket) in MFC; it's only possible to call Receive, which blocks and waits. Blocking the UI isn't really smart, so I guess I could use some threading technique, but since I'm not all that experienced with MFC, how should that be implemented?

    The other part of the question is whether I should do this at all, or revert to TCP, considering reliability and implementation issues. I know that UDP is unreliable, but just how unreliable is it really? I read that it can be up to 50% faster, which is a lot for me.

    References I used: http://msdn.microsoft.com/en-us/library/09dd1ycd(v=vs.80).aspx

    Read the article

  • Can I share code & resources between Android projects without using a library?

    - by Tom
    The standard advice for sharing code and resources between Android projects is to use a library. Personally I find this works poorly if (a) the shared code changes a lot, or (b) your computer isn't fast enough. I also don't want to get into deploying multiple APKs, which seems to be necessary when I use dependent projects (i.e. Java Build Path, Projects tab). On the other hand, sharing a folder of source code using the Eclipse linked-source feature works great (Java Build Path, Source tab, Link Source button), but for these two issues: 1) I can't use the same technique to share resources. I can create the link to the resources' parent folder, but then things get wonky and the shared resources don't get compiled (I'm using ADT 21). 2) So then I settle for copying the shared resources into each project, but this doesn't work either: the shared code can't import its copy of the resources, because it doesn't know the package name of the project that uses it. The solution I've been using is to access the resources dynamically (see the sketch below), but that has become cumbersome as the number of resources grows. So, I need a solution to either (1) or (2), or I'll have to go back to a library project. (Or maybe there is another option I haven't thought of?)
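
    For reference, the "access the resources dynamically" workaround mentioned above is runtime name lookup, along these lines (a small sketch; it returns 0 for a missing name, and each call is slower than a compiled R reference, which is exactly why it scales badly):

        import android.content.Context;

        class DynamicResources {
            // Resolve a drawable by name at runtime instead of through R.drawable,
            // using the host project's package so shared code can find the copies.
            static int drawableId(Context ctx, String name) {
                return ctx.getResources()
                          .getIdentifier(name, "drawable", ctx.getPackageName());
            }
        }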

    Read the article

  • problem with a very simple tile-based game

    - by newbieguy
    Hello, I am trying to create a Pacman-like game. I have an array that looks like this:

        1111111111111
        1000000000001
        1111110111111
        1000000000001
        1111111111111

        1 = wall, 0 = empty space

    I use this array to draw tiles that are 16x16 in size. The game character is 32x32. Initially I represented the character's position as array indexes, [1,1] etc. I would update his position if array[character.new_y][character.new_x] == 0, then translate these array coordinates to pixels, [y*16, x*16], to draw him. He was lining up nicely and wouldn't go into walls, but I noticed that since I was moving him 16 pixels per step, he was moving very fast. So I decided to do it in reverse and store the character's position in pixels instead, so that he could move fewer than 16 pixels per step. I thought that a simple if statement such as

        if array[character.new_pixel_y / 16][character.new_pixel_x / 16] == 0

    would prevent him from going into walls, but unfortunately he eats a bit of the bottom and right-side walls. Any ideas how I would properly translate pixel position to array indexes? I guess this is something simple, but I really can't figure it out :(
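
    The nibbled walls come from testing a single pixel: array[new_pixel_y / 16][new_pixel_x / 16] is only the tile under the character's top-left corner, so the other 31 pixels in each direction are free to overlap walls. The fix is to test every tile the 32x32 character overlaps, using position + size - 1 for the far edges. A sketch of the check in Java:

        class TileCollision {
            static final int TILE = 16, SPRITE = 32;

            // True when no tile overlapped by the SPRITE x SPRITE box at
            // pixel (px, py) is a wall. Note the -1: the box's last pixel
            // is px + SPRITE - 1, not px + SPRITE.
            static boolean canMoveTo(int[][] map, int px, int py) {
                int firstCol = px / TILE, lastCol = (px + SPRITE - 1) / TILE;
                int firstRow = py / TILE, lastRow = (py + SPRITE - 1) / TILE;
                for (int row = firstRow; row <= lastRow; row++)
                    for (int col = firstCol; col <= lastCol; col++)
                        if (map[row][col] != 0) return false; // hit a wall tile
                return true;
            }
        }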

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS, and GROUP_USERS. The first two tables simply define all the users and all the groups on the site; the last table, GROUP_USERS, stores the groups every user is part of. It has only two columns: USER_ID and GROUP_ID. Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table.

    Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them a UNIQUE constraint, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem, since it's all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of?

    PS: I don't want to use an array column, even though they are supported by PostgreSQL; I want to stay as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!

    Read the article

  • getting string.substring(N) not to choke when N > string.length

    - by aape
    I'm writing some code that takes a report from the mainframe and converts it to a spreadsheet. They can't edit the code on the MF to give me a delimited file, so I'm stuck dealing with it as fixed width. It's working okay now, but I need to get it more stable before I release it for testing. My problem is that any given line of data might have, say, three columns of numbers, each five chars wide, at positions 10, 16, and 22. If on one particular row there's no data for the last two columns, the row won't be padded with spaces; rather, the length of the string will be only 14. So I can't just blindly write

        Dim s As String = someStream.ReadLine
        a = s.Substring(10, 5)
        b = s.Substring(16, 5)
        c = s.Substring(22, 5)

    because it will choke when it substrings past the end of the string. I know I could test the length of the string before processing each row, and I have automated the filling of some of the variables using a counter and a loop (using counter * theWidthOfTheGivenVariable to jump around), but this project was a dog to start with (come on! turning a report into a spreadsheet?), there are many different types of rows (it's not just a grid), and the code is getting ugly fast. I'd like this to be clean, clear, and maintainable for the poor sucker who gets it after me. If it matters, here's my code so far (it's really crufty at the moment; you can see some of my/its idiocy in the processSection#data subs).

    So, I'm wondering: 1) is there a way baked into .NET to have String.Substring not error when reading past the end of a string, without wrapping it in a Try...Catch? and 2) would it be appropriate in this situation to write a new string class that inherits from String and has a friendlier Substring function?

    ETA: Thanks for all the advice and knowledge, everyone. I'll go with the extension. Hopefully one of these years I'll get my chops up enough to pay someone back in kind. :)
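
    For the record, the clamping logic behind such an extension is tiny; inheriting from String isn't an option in .NET anyway, since System.String is sealed, which is why the extension method is the usual route. Sketched here in Java (Java's substring takes begin and end indexes where VB's Substring takes start and length, but the clamp is the same idea):

        class SafeSubstring {
            // Returns whatever fits: "" when start is past the end, and a
            // shortened slice when fewer than len characters remain.
            static String safeSub(String s, int start, int len) {
                if (s == null || start >= s.length()) return "";
                int end = Math.min(start + len, s.length());
                return s.substring(start, end);
            }
        }

    With this, a 14-character row simply yields empty strings for the missing columns instead of throwing.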

    Read the article

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read a lot these past few weeks about IE6, to see whether making sites look right in it is really that bad. I just learned HTML and CSS this past year, so I've been spoiled to start with basically CSS3 and HTML5, and I can do some really cool stuff super fast. I'm no IE6 master and I don't have years of experience with IE. So I thought it would take a little time to figure out all the known hacks for IE6/7 and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer 'principles'", or cut out a lot of the neat things I can do with more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die.

    My question is, why can't businesses upgrade their browsers? When I work with businesses, they almost always resist the first time I ask, but five seconds later I'll show them what a site looks like on my computer and talk about how great the latest stuff is (how much more secure the later browsers are, all the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much more smoothly business processes run, etc.), and they get excited! Within a few seconds they're up and running with Chrome or something. So, can businesses really not upgrade, for some reason? What are the reasons a business cannot upgrade? The main reason I can think of is that they have an old version of Windows. But (a) wasn't there a legal case about this? And (b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.

    Read the article

  • <input type="file"> reads only file name not full path

    - by Deep
    I am using GlassFish Server. I have seen Apache's file upload suggested as a solution, but I want to implement this on GlassFish. image.html:

        <form action="" method="post" enctype="multipart/form-data">
          Select a file: <input type="file" name="first" id="first"/> <br />
          <input type="button" name="button" value="upload" id="button" />
          <p id="test"></p>
          <img src='Unknown.png' id="profile_img" height="200px" width="150px"/>
        </form>

    test.js:

        $(document).ready(function() {
            var filepath = $("#first");
            $('#button').click(function() {
                $.ajax({
                    type: "post",
                    url: "imageservlet",
                    data: "user=" + filepath.val(),
                    success: function(msg) {
                        $("#profile_img").attr('src', msg);
                        $("#test").html(msg).fadeIn("fast");
                    }
                });
            });
        });

    imageservlet.java:

        String user = request.getParameter("user");
        out.print(user);

    The output is only the file name, not the full path.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: every 4096-byte combination found (column 'bytes'), and the number of times each byte combination appeared (column 'occurrences'). Assume two things: I can save data very fast, but I can update my saved data only very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:

    1. Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    2. Go through each 4096-byte combination in the file, saving up to 1 million rows of data in a temporary table. Go through that table and fix the entries (combining repeated byte combinations), then copy them to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted will take a bit of time to finish. I need the most efficient saving process.
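
    Reading "each 4096-byte combination" as every overlapping 4096-byte window, the slow part in both suggestions is the search-then-update round trip per window; aggregating counts in memory first and saving once avoids updates entirely. A sketch in Java, keyed on a SHA-256 digest so the map stores 32-byte keys rather than 4 KB windows (collisions are astronomically unlikely). At the full ~180 million windows the map itself outgrows RAM, so in practice you would flush partial maps to disk and merge them, as in the second suggestion above, but every flush then becomes a plain fast save with no lookups:

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.Base64;
        import java.util.HashMap;
        import java.util.Map;

        class WindowCounter {
            static final int WINDOW = 4096;

            // Count occurrences of every overlapping WINDOW-byte slice.
            static Map<String, Long> countWindows(String path) throws Exception {
                byte[] file = Files.readAllBytes(Paths.get(path)); // ~170 MB fits
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                Map<String, Long> counts = new HashMap<>();
                for (int i = 0; i + WINDOW <= file.length; i++) {
                    sha.reset();
                    sha.update(file, i, WINDOW);
                    String key = Base64.getEncoder().encodeToString(sha.digest());
                    counts.merge(key, 1L, Long::sum);
                }
                return counts; // bulk-save 'bytes'/'occurrences' in one pass
            }
        }

    If the table must store the raw window bytes, keep the digest as the grouping key and write the window itself alongside the first time each digest is seen.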

    Read the article

  • Worthwhile websites that a web developer must surf daily?

    - by I Like PHP
    Hello all, this may not be the right place to ask this question, but I can get the best answers only from here, so I'm posting it here. I'm a web developer working with PHP, MySQL, JavaScript, jQuery, AJAX, CSS, HTML, and JSON. I surf a few websites about web development daily. I know there are a lot of websites with very good knowledge that we aren't aware of, so I think we should share them with each other. I'm mentioning some useful links that we can surf daily to gain knowledge and develop better and faster; please suggest some good links that you visit regularly and that are the best in their field. I surf the links below regularly:

    - Stack Overflow (no doubt, it is the best)
    - Delicious (best social bookmarking website)
    - Smashing Magazine (best site for a web developer to improve their knowledge)
    - Net tuts (best tutorial website, with full explanations)
    - The official PHP site (nothing to say about it, just superb)
    - A JavaScript debugger (you can filter your JavaScript/jQuery code there)
    - The official jQuery site (best place to learn jQuery)

    I'm waiting for your great responses. I also need a good and trusted website on MySQL; I think the official MySQL website is very confusing, and I had to search a lot to find a single thing. If you have anything good on MySQL, please share. Thanks always.

    Read the article

  • Coldfusion 8 and HTTP PUT - is there a way to PUT an object?

    - by ciaranarcher
    Hi all. We are using Ehcache with ColdFusion 8 to cache things on a central server through a RESTful interface over HTTP. I am trying to cache a cfquery object on the cache server. I can get this to work if I call Ehcache directly (i.e. store it in a local cache), but if I try to cache on a remote server over HTTP I run into problems. The code I am using is as follows:

        <cfhttp url="http://localhost:8080/myCache/myKey" method="put"
                result="r" timeout="2" throwonerror="true">
            <cfhttpparam type="body" value="#ARGUMENTS.item#" />
        </cfhttp>

    ColdFusion doesn't like the reference to #ARGUMENTS.item# and complains: "Complex object types cannot be converted to simple values." Can anyone give me an example of how to PUT an object over HTTP using ColdFusion? If this is not possible with ColdFusion, then a Java example would be the next best thing. Many thanks in advance!

    PS: I do not want to serialize to text/JSON etc., as this approach has issues with data integrity and, most importantly, it's not fast enough.
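
    Since a Java example is acceptable: below is a minimal JDK-only sketch that PUTs a Java-serialized object as a raw binary body (binary, not the text/JSON serialization ruled out above). It assumes the cache endpoint stores the body opaquely; the object graph has to implement Serializable, and whether the native object underneath a cfquery does is a separate question:

        import java.io.ObjectOutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        class CachePut {
            // PUT one serialized object to the cache URL from the question.
            static void putObject(Object item) throws Exception {
                URL url = new URL("http://localhost:8080/myCache/myKey");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("PUT");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                try (ObjectOutputStream out =
                         new ObjectOutputStream(conn.getOutputStream())) {
                    out.writeObject(item);   // item must implement Serializable
                }
                if (conn.getResponseCode() >= 400) {
                    throw new IllegalStateException(
                        "PUT failed with HTTP " + conn.getResponseCode());
                }
            }
        }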

    Read the article

  • log activity, intrusion detection, user event notification (interaction), messaging

    - by Julian Davchev
    I have three questions that I somehow find related, so I'm putting them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache, and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement these. The system is user-aware, meaning all actions can be bound to a particular logged-in user.

    1. How do I log all actions/activities of users, so that stats/graphs can be extracted later for analysis? At best that would include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, having a cron job dump them into the DB, and another cron job analyze them might be a good idea here. Since I'm using Zend Framework, I guess I can use a request plugin so I don't have to make the log() call all over the code.
    2. How do I log things so the data can be used for intrusion detection? I know most of this can be done at the HTTP level, using Apache mods for example, but there are also application-specific cases (five failed login attempts in a row leads to a captcha, etc.). This would also involve tons of inserts. Here I guess direct use of memcache might be the best approach, since the data doesn't seem vital enough to persist permanently. I'm not sure whether I could reuse the data from point 1.
    3. The system will notify users of certain events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user; others are just informational. I wonder if there are common solutions for needs like this. For example: based on occurring events, the user is notified (in a user inbox, for example) of what happened, with a link or something leading to the details so they can take action accordingly.

    These seem trivial at first look, but the problem I see is that coding them directly becomes hard to maintain really fast.

    Read the article

  • Windows Azure Worldwide availability

    - by Insomniac
    Hi, I've been reviewing the Windows Azure platform for some time, and I can't find the answer to one very important question. If I deploy my application in the cloud, how will it be reached from different places worldwide? For example, suppose I have a web application with a database and want it to be accessible to users in the UK, US, China, etc. Can I be sure that any user in the world will get almost the same request-processing time? I think of it this way:

    1. The user sends a request (navigates in a browser to my web site).
    2. The request enters the cloud at the nearest location (the MS data center closest to the user?).
    3. It is processed by an instance of my web application (in the nearest location), with a request to my centralized DB, which may be far away, but the SQL request goes over MS's internal network, which I believe should be very fast.
    4. The response is sent to the user.

    Please let me know if I'm wrong. Thanks.

    Read the article

  • Fixing an SQL WHERE clause that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we enter numeric values in the field. There is only one exception: we place an "R" before the number when a vehicle is retired (which means we sold it or junked it). Assuming the users do this right, we should not run into problems with this method. (Right or wrong isn't the issue here.)

    Fast forward to now. I am trying to query a subset of these vehicle numbers (800-899) for some special processing. Using the string range '800' to '899' also matches 80, 81, etc. If I cast the vehicle number to an INT, I should get the right range, except that the "R" vehicles are kicking me in the butt now. I have tried

        where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899

    but I get a casting error on one of the "R" vehicles. What does work is

        where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899

    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a subquery, all of which produce a casting error.

    Read the article

  • Choosing approach for an IM client-server app

    - by John
    Update: totally rewrote this to be more succinct. I'm looking at a new application, one part of which will be very similar to standard IM clients: text chat, the ability to send attachments, maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e. all traffic goes through my central server. That means that if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client-to-server communication; my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, but probably also browser-based ones, through either Flex/Silverlight or HTML/AJAX. I see three options for my own client-server communication layer:

    1. XMPP. The benefits are that clients already exist, as do open-source servers. However, it requires the most up-front research/learning, and it also appears it might raise legal issues due to the GPL.
    2. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However, this approach requires building said server from scratch, and it also makes a JS client tricky.
    3. Servlets (or a similar web server). Using the tried and tested Java web stack, clients send HTTP requests much as on AJAX-based websites. The benefit is that the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it appropriate technology for real-time communication? (A long-polling sketch follows below.)

    Advice and suggestions are welcome, especially the pros and cons of a web-server approach compared to a socket-based approach.
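
    On option 3's real-time question: plain request/response can approximate server push with long polling, where each client GET parks on the server until a message arrives or a timeout elapses, then immediately re-issues itself. A minimal single-user sketch with the standard Servlet API; a real server would keep one queue per session and use async servlets rather than holding a thread per waiting client:

        import java.io.IOException;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PollServlet extends HttpServlet {
            private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                try {
                    String msg = inbox.poll(30, TimeUnit.SECONDS); // park up to 30 s
                    resp.setContentType("text/plain");
                    resp.getWriter().print(msg == null ? "" : msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            // Called by whatever receives chat traffic elsewhere in the server.
            public void deliver(String message) {
                inbox.offer(message);
            }
        }

    The cost is per-request overhead and somewhat higher latency than a raw socket, which is exactly the trade-off the question is weighing.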

    Read the article

  • I want to build a Google-friendly web app, where should I start?

    - by ronii
    I have only very basic experience with HTML/CSS and quite a bit of experience testing software and web apps from a consumer perspective. I'd love to launch a web application that plays nicely with Google services, similar to some of the apps you'd find on the Google Apps Marketplace, such as ManyMoon, time to note, Socialwok, etc. I'm a huge Google fan and would like to build something that's well integrated with other Google services. If you were a total beginner and wanted to build a complex app like one of the examples above (project management, CRM, etc.), where would you start? If you worked your ass off 18 hours a day, seven days a week, how fast could you do it? I've dabbled in various languages and development frameworks, and read about which apps use which languages, but it's hard to figure out what would be most beneficial to jump into. Ruby on Rails, PHP, Google Web Toolkit, App Engine; the list goes on and on. I want to be able to build and launch my own scalable web app. Thanks.

    Read the article

  • Optimization of a C++ matrix/bitmap class

    - by Andrew
    I am looking for a 2D matrix (or bitmap) class which is flexible but also has fast element access. A flexible class should let you choose the dimensions at runtime, and would look something like this (simplified):

        class Matrix {
        public:
            Matrix(int w, int h) : data(new int[w*h]), width(w) {}
            void SetElement(int x, int y, int val) { data[x+y*width] = val; }
            // ...
        private:
            int width;
            int* data;
        };

    A faster, often proposed solution using templates is (simplified):

        template <int W, int H>
        class TMatrix {
        public:
            TMatrix() : data(new int[W*H]) {}
            void SetElement(int x, int y, int val) { data[x+y*W] = val; }
        private:
            int* data;
        };

    This is faster because the width can be "inlined" into the code, which the first solution does not allow. However, it is no longer flexible, since the size can't be changed at runtime. So my question is: is there a way to tell the compiler to generate faster code (as with the template solution) when the size is fixed in the code, and flexible code when it is runtime-dependent? I tried to achieve this by writing "const" wherever possible, with gcc and VS2005, but had no success. This kind of optimization would be useful in many other similar cases.

    Read the article

  • sql queries slower than expected

    - by neubert
    Before I show the query, here are the relevant table definitions:

        CREATE TABLE phpbb_posts (
            topic_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
            poster_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
            KEY topic_id (topic_id),
            KEY poster_id (poster_id)
        );

        CREATE TABLE phpbb_topics (
            topic_id mediumint(8) UNSIGNED NOT NULL auto_increment,
            PRIMARY KEY (topic_id)
        );

    Here's the query I'm trying to do:

        SELECT p.topic_id, p.poster_id
        FROM phpbb_topics AS t
        LEFT JOIN phpbb_posts AS p
            ON p.topic_id = t.topic_id AND p.poster_id <> ...
        WHERE p.poster_id IS NULL;

    Basically, the query is an attempt to find all topics where the number of posts by anyone other than the target user is zero; in other words, the topics where the only person who has posted is the target user. The problem is that the query is taking a super long time. My general assumption when it comes to SQL is that JOINs of any kind are super fast and take almost no time at all, assuming all the relevant columns are primary or foreign keys (which in this case they are). I tried out a few other queries:

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p ON p.topic_id = t.topic_id;

    That returns 353340 pretty quickly. I then tried these:

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p
            ON p.topic_id = t.topic_id AND p.poster_id <> 77198;

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p ON p.topic_id = t.topic_id
        WHERE p.poster_id <> 77198;

    Both of those take quite a while (between 15 and 30 seconds). If I change the <> to an =, it takes no time at all. Am I making some incorrect assumptions? Maybe my DB is just foobar'd?

    Read the article

  • jQuery and div tags used in an unordered list

    - by shan2on
    Hello everyone, I'm not sure where my errors lie, but here is my scenario. I have an unordered list in HTML that handles my main link bar:

        <ul id="navlist">
          <li><a href="/" id="current">home</a></li>
          <li><a href="#" id="search">search</a></li>
          <li><a href="#" id="users">login</a></li>
        </ul>

    What I want is to hook each id and have jQuery handle each one and display it where specified in my CSS file. For now, let's just ignore my "home" link. Here is a snippet of my inline jQuery:

        $(document).ready(function() {
            $("a").click(function () {
                $("div#login").slideToggle("fast");
            });
        });

    Now, whichever div I specify there gets called, but it will only ever call that one div, from either link in the list above. My goal is to show a search bar when "search" is clicked and a login box when "login" is clicked. To my knowledge (noob that I am), I believe I should use an ... as a class, but that is what brings me here. I believe my error is in my jQuery and not my CSS since, as stated above, the same box is displayed when the function is called from either tag. Any help is greatly appreciated. Thanks in advance, shan2on.

    Read the article

  • Parallelize or vectorize all-against-all operation on a large number of matrices?

    - by reve_etrange
    I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm. In an earlier question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so:

        list = who('data_matrix_prefix*');
        H = cell(numel(list), numel(list));
        for i = 1:numel(list)
            for j = 1:numel(list)
                if i ~= j
                    eval(['H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
                end
            end
        end

    is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s). However, operating on all the data requires almost 25 million calls. I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop:

        % who(), load() and struct2cell() calls place k data matrices
        % in a 1D cell array called data.
        nextData = cell(k, 1);
        for i = 1:k
            [nextData{:}] = deal(data{i});
            H{:,i} = cellfun(@compare, data, nextData, 'UniformOutput', false);
        end

    Unfortunately, this is not really any faster, because all the time is spent in compare(). Both of these code examples seem ill-suited for parallelization, and I'm having trouble figuring out how to make my variables sliced. compare() is fully vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB). Does anyone see an (explicitly) parallelized solution or a better vectorization of the problem?

    Read the article

  • Are reads and (transactional) writes faster for entities of the same group than otherwise?

    - by indiehacker
    What advantage is there to designing child-parent relationships, which allow us to do writes in transactions, when there is no real concern about consistency, contention, and those sorts of more complex issues? Does it make writes and reads faster? Consider my situation: there are many .png images referenced by one mosaic layer, and these .png images are written just once, by a single user. The user can design many mosaic layers, and her mosaic layers and referenced image entities are never changed or updated; they are just deleted at some time in the future. Other users can come to the web project's site and interactively view the mosaic layer in different layouts/configurations of the images as they play (query) with different criteria, so reads should be very fast. There is no real worry about contention, or about users conflicting with one another when writing new image entities. Because of that, I am assuming there is no "requirement" for the .png image entities to be grouped in a child-parent relationship under the same mosaic layer. However, since the documentation says entities of the same group are stored close to one another, if the many image entities were grouped as children of a single mosaic-layer parent, would writing (in a transaction) and reading happen much faster?

    Read the article
