Search Results

Search found 9271 results on 371 pages for 'whole foods'.

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind-numbingly slow. Changes to the data are very rare, but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have changed. For example, here are the MD5 hashes for the 7 files as of last week:

        0709d609d69385255c496436eb50402c
        709465a74411bd596595c7b9b158ae6a
        4ab657320ef33e3d5eb498e4c13d41b7
        3b49c6ab199994fd776bb63761414e72
        0fc28c5a010fc3c06c0c930c88e31a15
        c4ecd214662cac5aae0e53f6f252bf0e
        8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough to use less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file, but is there a better way?
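
    Hashing the concatenation of the 7 digests gives exactly that: a single 16-byte value that changes whenever any per-file hash changes. A minimal sketch in Java (whatever the language, the one hard requirement is that both sides feed the digests in the same fixed order):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class CombinedHash {
            // Fold per-file hex digests into one 32-character "hash of hashes".
            static String combine(String[] fileHashes) throws NoSuchAlgorithmException {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                for (String h : fileHashes) {
                    md5.update(h.getBytes(StandardCharsets.US_ASCII)); // same order on both sides
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md5.digest()) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }

    This is what hashing a file of the 7 hashes does, minus the file. Any digest would do here; MD5 is fine because the goal is change detection, not security.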

  • Is this method a good approach to get SQL values from C#?

    - by MadBoy
    I have this little method that I use to get stuff from SQL. I call it either with varSearch = "" or varSearch = "something". I would like to know whether having the method written this way is best, or whether it would be better to split it into two methods (by overloading), or whether I could somehow parametrize the whole WHERE clause?

        private void sqlPobierzKontrahentDaneKlienta(ListView varListView, string varSearch)
        {
            varListView.BeginUpdate();
            varListView.Items.Clear();
            string preparedCommand;
            if (varSearch == "")
            {
                preparedCommand = @"SELECT t1.[KlienciID],
                    CASE WHEN t2.[PodmiotRodzaj] = 'Firma'
                         THEN t2.[PodmiotFirmaNazwa]
                         ELSE t2.[PodmiotOsobaNazwisko] + ' ' + t2.[PodmiotOsobaImie]
                    END AS 'Nazwa'
                    FROM [BazaZarzadzanie].[dbo].[Klienci] t1
                    INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2
                        ON t1.[PodmiotID] = t2.[PodmiotID]
                    ORDER BY t1.[KlienciID]";
            }
            else
            {
                preparedCommand = @"SELECT t1.[KlienciID],
                    CASE WHEN t2.[PodmiotRodzaj] = 'Firma'
                         THEN t2.[PodmiotFirmaNazwa]
                         ELSE t2.[PodmiotOsobaNazwisko] + ' ' + t2.[PodmiotOsobaImie]
                    END AS 'Nazwa'
                    FROM [BazaZarzadzanie].[dbo].[Klienci] t1
                    INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2
                        ON t1.[PodmiotID] = t2.[PodmiotID]
                    WHERE t2.[PodmiotOsobaNazwisko] LIKE @searchValue
                       OR t2.[PodmiotFirmaNazwa] LIKE @searchValue
                       OR t2.[PodmiotOsobaImie] LIKE @searchValue
                    ORDER BY t1.[KlienciID]";
            }
            using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetails))
            using (SqlCommand sqlQuery = new SqlCommand(preparedCommand, varConnection))
            {
                sqlQuery.Parameters.AddWithValue("@searchValue", "%" + varSearch + "%");
                using (SqlDataReader sqlQueryResult = sqlQuery.ExecuteReader())
                {
                    if (sqlQueryResult != null)
                    {
                        while (sqlQueryResult.Read())
                        {
                            string varKontrahenciID = sqlQueryResult["KlienciID"].ToString();
                            string varKontrahent = sqlQueryResult["Nazwa"].ToString();
                            ListViewItem item = new ListViewItem(varKontrahenciID, 0);
                            item.SubItems.Add(varKontrahent);
                            varListView.Items.AddRange(new[] { item });
                        }
                    }
                }
            }
            varListView.EndUpdate();
        }
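
    The WHERE clause can indeed be parametrized away, which collapses the if/else into a single statement. A hedged sketch of the idea, reusing the names from the method above (only the query text changes; the rest of the method stays as it is):

        // When @searchValue is empty the first test is true for every row, so
        // the LIKE filters are effectively switched off. SQL Server lets the
        // same named parameter appear several times, so one AddWithValue
        // still covers all of them.
        preparedCommand = @"SELECT t1.[KlienciID] -- plus the same CASE expression as above
            FROM [BazaZarzadzanie].[dbo].[Klienci] t1
            INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2
                ON t1.[PodmiotID] = t2.[PodmiotID]
            WHERE @searchValue = ''
               OR t2.[PodmiotOsobaNazwisko] LIKE '%' + @searchValue + '%'
               OR t2.[PodmiotFirmaNazwa]    LIKE '%' + @searchValue + '%'
               OR t2.[PodmiotOsobaImie]     LIKE '%' + @searchValue + '%'
            ORDER BY t1.[KlienciID]";
        sqlQuery.Parameters.AddWithValue("@searchValue", varSearch);  // no '%' wrapping here

    One caveat worth knowing: catch-all predicates like this can yield a worse query plan than two specialized queries, so measure before committing to it.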

  • Cannot locate record in delphi ADO query

    - by Danatela
    I can't locate any record in a TADOQuery using the PK. First, I tried the standard Locate method:

        PPUQuery.Locate('ID', SpPlansQuery['PPONREC'], []);

    It always returns False, but a manual search (walking the whole query and matching ID against the given PPONREC, which is really slow) finds the desired row. I tried using loPartialKey and switched the query's CursorLocation to clUseServer, but it didn't help. Next, I tried to filter my PPUQuery:

        PPUQuery.Filter := 'ID = ' + VarToStr(SpPlansQuery['PPONREC']);
        PPUQuery.Filtered := True;
        PPUQuery.First;

    But after that PPUQuery.Eof is True and PPUQuery.RecordCount equals 0. The underlying database is Oracle 9, and ID is of type INTEGER and is the PK of table TPORDER_CMK. PPUQuery.SQL is:

        SELECT tp.*, la.*, lm.*, ld.*, ld1.*, to_cmk.*
        FROM ppu_plan.tporder_cmk tp
        JOIN PPU_PLAN.LARTICLES la ON TP.ARTICLE = LA.ID
        JOIN PPU_PLAN.LMATERIAL lm ON TP.MATERIAL = lm.id
        JOIN PPU_PLAN.LCADEP ld ON TP.CADEP = LD.ID
        JOIN PPU_PLAN.LCADEP ld1 ON TP.PRODUCER = LD1.ID
        JOIN PPU_PLAN.TORDER_CMK to_cmk ON TP.order_id = TO_cmk.ID
        WHERE TP.PLAN_ID = :pplan_id

    What should I try next, and how do I solve this problem?
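
    One thing worth checking first (an educated guess, not a confirmed diagnosis): every joined table here has its own ID column, so the dataset ends up with several fields that were all called ID, and ADO disambiguates duplicates by renaming them (ID, ID_1, ID_2, ...). The 'ID' in Locate may therefore be bound to one of the lookup tables rather than to tporder_cmk. Aliasing the key you actually search on removes the ambiguity, and forcing the variant to an integer guards against a silent string-vs-number mismatch:

        // In the SELECT list, give the wanted key an unambiguous name:
        //   SELECT tp.ID AS TP_ID, tp.*, la.*, lm.*, ld.*, ld1.*, to_cmk.* ...
        // then locate on the alias (VarAsType is in the Variants unit):
        PPUQuery.Locate('TP_ID', VarAsType(SpPlansQuery['PPONREC'], varInteger), []);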

  • Java servlet's request parameter's name set to entire json object

    - by Geren White
    I'm sending a JSON object through ajax to a Java servlet. The JSON object is key-value with three keys that point to arrays and one key that points to a single string. I build it in JavaScript like this:

        var jsonObject = {"arrayOne": arrayOne,
                          "arrayTwo": arrayTwo,
                          "arrayThree": arrThree,
                          "string": stringVar};

    I then send it to the servlet using ajax as follows:

        httpRequest.open('POST', url, true);
        httpRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        httpRequest.setRequestHeader("Connection", "close");
        var jsonString = jsonObject.toJSONString();
        httpRequest.send(jsonString);

    This sends the string to my servlet, but it isn't showing up as I expected: the whole JSON string gets set as the name of one of the request's parameters. So if I do request.getParameterNames() in my servlet, it returns an enumeration in which one parameter name is the entire object's contents. I may be mistaken, but my thought was that it should set each key to a different parameter name, so I should have 4 parameters: arrayOne, arrayTwo, arrayThree, and string. Am I doing something wrong, or is my thinking off here? Any help is appreciated. Thanks
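
    The servlet only splits the body into parameters when the body really is key=value&key=value form data; a raw JSON string declared as x-www-form-urlencoded gets parsed as one giant parameter name. A hedged sketch of the servlet side reading the body and parsing it as JSON instead (org.json is used as an example parser; any JSON library works the same way, and the class name is a placeholder):

        import java.io.BufferedReader;
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.json.JSONArray;
        import org.json.JSONObject;

        public class MyServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Read the raw request body; it is not available via getParameter().
                StringBuilder body = new StringBuilder();
                try (BufferedReader reader = req.getReader()) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        body.append(line);
                    }
                }
                JSONObject json = new JSONObject(body.toString());
                JSONArray arrayOne = json.getJSONArray("arrayOne");  // keys from the question
                String stringVar = json.getString("string");
            }
        }

    The alternative is to keep getParameter() and send real form data: a body of "json=" + encodeURIComponent(jsonString) on the client, then request.getParameter("json") on the server.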

  • In sync query calls, one query causing the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it. I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run.

    I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed over other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time!

    Everything was running synchronously; there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple, don't have anything special, and are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, since the profiler (an application profiler, not SQL Profiler) reported the changes in the method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache/query-optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the bottleneck was in SQL Server, which was using most of the CPU time.

  • How do I extract specific data with preg_match?

    - by paulswansea
    I'm looking to extract values from a whole load of HTML (I've trimmed it down to the relevant data). There are multiple 'select' elements, and I only want to extract those whose 'name' attribute is 'aMembers'. The resulting values I would like to retrieve are 5, 10, 25 and 30 (see below). How can I achieve this with preg_match?

        <DIV id="searchM" class="search"><select name="aMembers" id="aMembers" tabIndex="2">
            <option selected="selected" value="">Data 3</option>
            <option value="5">A name</option>
            <option value="10">Another name</option>
        </select>
        </DIV>
        <DIV id="searchM" class="search"><select name="bMembers" id="bMembers" tabIndex="2">
            <option selected="selected" value="">Data 2</option>
            <option value="15">A name</option>
            <option value="20">Another name</option>
        </select>
        </DIV>
        <DIV id="searchM" class="search"><select name="aMembers" id="Members" tabIndex="2">
            <option selected="selected" value="">Data 1</option>
            <option value="25">A name</option>
            <option value="30">Another name</option>
        </select>
        </DIV>
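
    A sketch of one way to do it in two passes with preg_match_all rather than a single preg_match: first grab each select block whose name is aMembers, then pull the numeric option values out of those blocks. (For anything less regular than this snippet, DOMDocument + DOMXPath is a safer tool than regexes on HTML.)

        <?php
        // Pass 1: the aMembers select blocks; /s lets . span newlines.
        preg_match_all('~<select[^>]*name="aMembers"[^>]*>(.*?)</select>~s', $html, $selects);

        $values = array();
        foreach ($selects[1] as $block) {
            // Pass 2: only options with a non-empty numeric value.
            preg_match_all('~<option\s+value="(\d+)"~', $block, $opts);
            $values = array_merge($values, $opts[1]);
        }
        // $values is now array('5', '10', '25', '30')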

  • How to implement the disposable pattern in a class that inherits from another disposable class?

    - by TheRHCP
    Hi, I have often used the disposable pattern in simple classes that referenced a small amount of resources, but I never had to implement this pattern in a class that inherits from another disposable class, and I am starting to get a bit confused about how to free all the resources. I'll start with a little sample code:

        public class Tracer : IDisposable
        {
            bool disposed;
            FileStream fileStream;

            public Tracer()
            {
                // Some fileStream initialization
            }

            public void Dispose()
            {
                this.Dispose(true);
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (!disposed)
                {
                    if (disposing)
                    {
                        if (fileStream != null)
                        {
                            fileStream.Dispose();
                        }
                    }
                    disposed = true;
                }
            }
        }

        public class ServiceWrapper : Tracer
        {
            bool disposed;
            ServiceHost serviceHost;
            // Some properties

            public ServiceWrapper()
            {
                // Some serviceHost initialization
            }

            //protected override void Dispose(bool disposing)
            //{
            //    if (!disposed)
            //    {
            //        if (disposing)
            //        {
            //            if (serviceHost != null)
            //            {
            //                serviceHost.Close();
            //            }
            //        }
            //        disposed = true;
            //    }
            //}
        }

    My real question is: how do I implement the disposable pattern inside my ServiceWrapper class, so that when I dispose an instance of it, the resources in both the derived and the base class are released? Thanks.
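
    The standard shape of the answer: the derived class overrides only Dispose(bool), releases its own resources, and then delegates upward with base.Dispose(disposing); the public Dispose() stays in the base class alone. The commented-out override is almost there - it just needs the base call:

        protected override void Dispose(bool disposing)
        {
            if (!disposed)
            {
                if (disposing)
                {
                    if (serviceHost != null)
                    {
                        serviceHost.Close();
                    }
                }
                disposed = true;
            }
            base.Dispose(disposing);   // lets Tracer dispose the FileStream
        }

    With that in place, disposing a ServiceWrapper closes the ServiceHost first and then falls through to Tracer, which disposes the FileStream.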

  • I have data about deadlocks, but I can't understand why they occur (MS SQL/ASP.NET MVC)

    - by Alex
    I am getting a lot of deadlocks in my big web application. http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server Here I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead - that's much better than trying to catch them. So I spent the whole day with SQL Profiler, setting the trace keys etc., and this is what I got. There's a Users table, and I have a heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query on ALL pages (it's where I get the User from cookies):

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it isn't run, and users of my site see an error screen. I have read a lot about deadlocks, and I don't understand why this causes one. Both of these queries obviously run very often - at least once a second, maybe more (300-400 users online) - so they can easily run at the same time, but why does that cause a deadlock? Please help. Thank you
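
    A hedged sketch of two common mitigations (whether they fit depends on the schema and workload, so treat both as things to test, not a diagnosis): give the login lookup an index so it can seek to a single row instead of scanning past rows the UPDATE has locked, and/or switch the database to row-versioned reads so readers see the last committed version instead of blocking on, and deadlocking with, writers. The database name below is a placeholder:

        -- Let the cookie-login query seek instead of scan:
        CREATE INDEX IX_Users_Name_Password ON Users (Name, Password);

        -- Row-versioned reads for the whole database:
        ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;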

  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come as void pointers, so I'm using address manipulation to perform the copies. Basically, the algorithm starts with an offset and works its way through the whole 1-d representation of the array like swiss cheese, knocking out other offsets until it gets back to the original one. Then you have to start at the next, untouched offset and do it again. You repeat until all offsets have been touched.

    Right now, I'm using a std::set to fill up all possible offsets (0 up to the product of the dimensions of the array). Then, as I go through the algorithm, I erase from the set. I figured this would be fastest because I need to randomly access offsets in the tree/set and delete them, then quickly find the next untouched/undeleted offset. But: first, filling up the set is very slow, and it seems like there must be a better way - it individually calls new[] for every insert, so if I have 5 million offsets, that's 5 million news, plus constantly re-balancing the tree, which as you know is not fast for a pre-sorted list. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm using up as much memory as the array itself just to store the list of untouched offsets. Fourth, determining whether there are any untouched offsets and getting one of them is fast - a good thing. Does anyone have suggestions for any of these issues?
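
    A sketch of the usual fix, hedged because only the bookkeeping is shown (the cycle-following step is left abstract): replace the set with a bit vector. std::vector<bool> packs one bit per offset, so 5 million offsets cost about 625 KB instead of a tree node each; marking and testing are O(1) with no allocation; and "next untouched" is a scan that resumes where it last stopped, so it is amortized O(1) per cycle start.

        #include <cstddef>
        #include <vector>

        std::size_t next_offset_in_cycle(std::size_t offset);  // your existing transpose step

        void run_all_cycles(std::size_t totalOffsets) {
            std::vector<bool> touched(totalOffsets, false);  // 1 bit per offset
            std::size_t start = 0;
            while (true) {
                while (start < totalOffsets && touched[start]) ++start;  // next untouched
                if (start == totalOffsets) break;  // everything visited
                std::size_t cur = start;
                do {  // knock out one whole cycle
                    touched[cur] = true;
                    cur = next_offset_in_cycle(cur);
                } while (cur != start);
            }
        }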

  • MySQL dropping inserts with triggers

    - by user2891127
    Using MySQL 5.5. I have two tables; one has a whitelist of hashes. When I insert a new row into the other table, I want to first compare the hash in the insert statement to the whitelist. If it's in the whitelist, I don't want to do the insert (less data to plow through later). The inserts are generated by another program and come as text files of SQL statements. I've been playing with triggers, and almost have it working:

        BEGIN
            IF (SELECT COUNT(md5hash) FROM whitelist WHERE md5hash = NEW.md5hash) > 0 THEN
                SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Already Whitelisted';
            END IF;
        END

    But there's a problem: the SIGNAL throwing up the error stops the import. I want to skip that line, not stop the whole import, and some searching didn't find any way to silently skip the insert. My next idea was to create a duplicate table definition and redirect the insert to that dup table, but OLD and NEW don't seem to apply to table names. Other than adding an ignore column to my table and doing a mass delete based on that column after the import, is there any way to achieve my goal?
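
    A hedged alternative that sidesteps triggers entirely (the staging-table and column names beyond those in the question are invented): point the generated insert files at a staging table, then copy only the non-whitelisted rows across in one set-based statement. Nothing ever errors, so the import runs to completion.

        -- 1. Import the text files into `staging` (same definition as the real table).

        -- 2. Move everything whose hash is NOT in the whitelist:
        INSERT INTO main_table (md5hash, payload)
        SELECT s.md5hash, s.payload
        FROM staging s
        LEFT JOIN whitelist w ON w.md5hash = s.md5hash
        WHERE w.md5hash IS NULL;

        -- 3. Clear the staging table for the next batch:
        TRUNCATE TABLE staging;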

  • Magento - save quote_address on cart addAction using observer

    - by Byron
    I am writing a custom Magento module that gives rate quotes (based on an external API) on the product page. After I get the response on the product page, it adds some of the info from the response into the product view's form. The goal is to save the address (as well as some other things, but those can live in the session for now). The address needs to be saved to the quote, so that the checkout (onestepcheckout, free version) will automatically fill those values in (city, state, zip, country, shipping method [of which there are 3]) and load the rate quote via ajax (which it's already doing).

    I have gone about this using an observer, watching for the cart events, filling in the address and saving it. But when I end up on the cart or checkout page, about 4 times out of 5 the data is lost, and the SQL table shows that the quote_address is saved with no address info, even though there is an explicit save. The observer methods I've used are:

        checkout_cart_update_item_complete
        checkout_cart_product_add_after

    The saving code is (I've tried this with the address, quote and cart all not calling save(), with the same results, as well as not calling setQuote()):

        $quote->getShippingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();
        $quote->getBillingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();
        $quote->save();
        $cart->save();

    I imagine there's a good chance this is not the best place to save this info, but I am not quite sure where else. Is there a good place to do this without overriding a whole controller just to save the address on the add-to-cart action? Thanks so much; let me know if I need to clarify.
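
    A hedged guess at the mechanism, plus a sketch (module and method names are placeholders): both of those events fire before the cart's own save() finishes, and the cart save re-collects and re-saves the quote, which can clobber an address written moments earlier. Listening on checkout_cart_save_after instead puts your write last, and letting the quote persist the addresses itself avoids the separate address saves racing the cart.

        <!-- config.xml sketch -->
        <events>
            <checkout_cart_save_after>
                <observers>
                    <mymodule_save_rate_address>
                        <class>mymodule/observer</class>
                        <method>saveRateAddress</method>
                    </mymodule_save_rate_address>
                </observers>
            </checkout_cart_save_after>
        </events>

    In the observer, set the fields on both addresses exactly as above, then finish with a single $quote->collectTotals()->save(); rather than saving each address individually.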

  • Use jQuery inside data returned by ajax using "innerHTML"

    - by me.again
    Hi, I want to use jQuery inside data returned by ajax using "innerHTML". Look here:

        <a href=\"#\" onclick=\"$.post('". $url ."', {'t' : 't'}, function(data){
            $('content_rows').attr('innerHTML', data);
        }); " . $this->js_rebind . "; return false;\">" . $text . '</a>';

    This link moves between pages by ajax (by reloading the div that contains the data), without - of course - reloading the whole page, like this:

        <div id="content_rows">
            rows from mysql database
        </div>

    Now everything is OK, but I use the "detailsRow plugin" like this:

        <script type="text/javascript">
            $(document).ready(function() {
                $('#rows').detailsRow('admin/blog/detailsRow', {
                    data: {"id": "id"},
                    dataType: "script"
                });
            });
        </script>

    This plugin makes every TR/row in the table show more details on click (+/-); see http://webworkflow.co.uk/plugins/detailsRow/ Now, the plugin works fine on the first page (before the div is reloaded by jQuery), but after reloading the div - on the other pages, and also when I go back to the first page - it doesn't work. I put the code inside the content_rows div like this:

        <div id="content_rows">
            <script type="text/javascript">
                $(document).ready(function() {
                    $('#rows').detailsRow('admin/blog/detailsRow', {
                        data: {"id": "id"},
                        dataType: "script"
                    });
                });
            </script>
        </div>

    but that also doesn't work. Sorry, I'm a beginner in jQuery. Thanks!
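
    Two likely culprits, hedged as guesses rather than certainties: $('content_rows') is missing the # (it should be $('#content_rows')), and $(document).ready() only runs on the initial page load, so a script arriving inside ajax-injected HTML never re-initializes the plugin. The usual fix is to re-run the plugin in the ajax callback itself:

        // In the link's onclick, re-bind after the new rows are in place:
        $.post(url, {t: 't'}, function (data) {
            $('#content_rows').html(data);   // '#' selector; .html() instead of attr('innerHTML')
            $('#rows').detailsRow('admin/blog/detailsRow', {
                data: {id: 'id'},
                dataType: 'script'
            });
        });

    jQuery's .live() only helps for plain event handlers; a plugin like detailsRow has to be called again on the newly inserted rows.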

  • jQuery: Triggering a click on an img not working

    - by Twecs
    I'm testing in Chrome. I have a bunch of 'add item' icons on screen that the user can click to add that one item to the database. I also have a button at the bottom of that list, which should add the whole list of items. It seems to me that the easiest way to do this is to trigger the 'click' event for all these icons (the reason I'm doing it via the icons is that item-specific values are stored as attributes of the div in which each icon resides). However, I can't get it to work: the event handlers for the individual icons work perfectly, and the event handler for the add-them-all button does give me an alert if I put that in, but if I add the trigger call, nothing happens. I read some posts suggesting that browsers don't allow you to trigger click events, so I added a 'hover' event listener to the icons to see if the problem is the type of event I want to trigger. Answer: no, same story - the alert works, but the trigger won't. I have placed the icon event listener in the code before the button event listener. What's going on? Twecs
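
    For reference, a minimal sketch of the pattern (the class and id names are invented): jQuery can trigger handlers that jQuery itself bound; what .trigger('click') will not do is fire a native default action such as following a link. If a sketch like this works but your page doesn't, the usual suspects are a selector that matches nothing at trigger time, or icons added to the DOM after the handler was bound directly to them.

        // Each icon adds its own item, reading values off its parent div:
        $('.add-item-icon').click(function () {
            var itemId = $(this).parent().attr('data-item-id');
            // ... send itemId to the server ...
        });

        // The add-them-all button just replays every icon's handler:
        $('#add-all').click(function () {
            $('.add-item-icon').each(function () {
                $(this).trigger('click');
            });
        });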

  • How do I deal with different requests that map to the same response?

    - by daxim
    I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (at the protocol level) right. (Don't worry about memoisation on my part; I have that covered already - my question here is about being a good citizen of the Web as a whole.) There's one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two requests map to the same representation of the response (if this is a dumb way to design the interface, propose something better):

        GET /service?mandatory_parameter=some_data HTTP/1.1

        GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1

    However, I imagine clients do not know this and would treat them as separate, therefore wasting cache storage. What should I do to avoid violating the golden rule of caching? Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order), and return a client error unless the required form is met? Instead of an error, redirect permanently to the canonical form of the request? Or is it enough not to mind what the request looks like, and just respond with the same ETag for the same responses?
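
    For illustration, a sketch of the redirect option (the friendliest of the three to existing clients): the server answers any non-canonical spelling with a permanent redirect to the one spelling it wants cached, so every cache along the way ends up storing a single copy plus a tiny redirect.

        GET /service?mandatory_parameter=some_data HTTP/1.1

        HTTP/1.1 301 Moved Permanently
        Location: /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3

    Matching ETags alone will not merge cache entries: caches key on the URL first, so two URLs mean two stored copies even when the validators agree.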

  • Using enum values to represent binary operators (or functions)

    - by Bears will eat you
    I'm looking for an elegant way to use the values in a Java enum to represent operations or functions. My guess is, since this is Java, there just isn't going to be a nice way to do it, but here goes anyway. My enum looks something like this:

        public enum Operator {
            LT, LTEQ, EQEQ, GT, GTEQ, NEQ;
            ...
        }

    where LT means < (less than), LTEQ means <= (less than or equal to), etc. - you get the idea. Now I want to actually use these enum values to apply an operator. I know I could do this with a whole bunch of if-statements, but that's the ugly, non-OO way, e.g.:

        int a = ..., b = ...;
        Operator foo = ...; // one of the enum values
        if (foo == Operator.LT) {
            return a < b;
        } else if (foo == Operator.LTEQ) {
            return a <= b;
        } else if ... // etc

    What I'd like to be able to do is cut out this structure and use some sort of first-class function or even polymorphism, but I'm not really sure how. Something like:

        int a = ..., b = ...;
        Operator foo = ...;
        return foo.apply(a, b);

    or even:

        int a = ..., b = ...;
        Operator foo = ...;
        return a foo.convertToOperator() b;

    But as far as I've seen, I don't think it's possible to return an operator or function (at least, not without using some 3rd-party library). Any suggestions?
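
    Java enums can do exactly the foo.apply(a, b) form via constant-specific method implementations: each constant overrides an abstract method declared on the enum itself.

        public enum Operator {
            LT   { public boolean apply(int a, int b) { return a <  b; } },
            LTEQ { public boolean apply(int a, int b) { return a <= b; } },
            EQEQ { public boolean apply(int a, int b) { return a == b; } },
            GT   { public boolean apply(int a, int b) { return a >  b; } },
            GTEQ { public boolean apply(int a, int b) { return a >= b; } },
            NEQ  { public boolean apply(int a, int b) { return a != b; } };

            public abstract boolean apply(int a, int b);
        }

        // usage: no if-chain, no third-party library
        boolean result = foo.apply(a, b);

    The dispatch is ordinary polymorphism - each constant is a singleton subclass - so this is the idiomatic Java replacement for the if/else ladder.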

  • How to conduct an interview for a development position remotely?

    - by sharptooth
    Usually we run interviews in the office. We have a room with a table; the interviewee and one or two interviewers sit at the table, the interviewers ask questions, often accompanied by code snippets on paper, and the interviewee (hopefully) answers them and writes code snippets to illustrate his point. Usually it's something like an interviewer writing about five lines of C++ code and asking some specific question - quite little code. Now we need to do the same remotely: we will be in our office and the interviewee will be far away - we have been asked to help hire a person for another office located abroad. Of course we can use some technology for voice calls, but I'm afraid that's the most we can count on. I see a whole set of obstacles here: how do we write illustrative code snippets and exchange them efficiently? What do we do to compensate for the fact that we're not native English speakers and the interviewees may or may not be native English speakers either (I'm afraid this can make conversation significantly harder)? Are there any best practices for this situation? How could we address the obstacles listed? What other things should we consider to run the interview most efficiently?

  • Applying an iterative algorithm to a set of rows from a database

    - by Corvin
    Hello. This question may seem too basic to some, but please bear with me; it's been a while since I dealt with decent database programming. I have an algorithm that I need to program in PHP/MySQL to work on a website. It performs some computations iteratively on an array of objects (it ranks the objects based on their properties). In each iteration the algorithm runs through the whole collection a couple of times, accessing various data from different places in the collection. The algorithm needs several hundred iterations to complete. The array comes from a database.

    The straightforward solution that I see is to take the results of a database query, create an object for each row, put the objects into an array, and pass the array to my algorithm. However, I'm concerned about the efficiency of such a solution when I have to work with an array of several thousand items, because what I'd be doing is essentially mirroring the results of a query into memory. On the other hand, making a database query a couple of times on each iteration of the algorithm also seems wrong. So, my question is: what is the correct architectural solution for a problem like this? Is it OK to mirror the query results into memory? If not, what is the best way to work with query results in such an algorithm? Thanks!
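
    For scale, a hedged sketch of the mirror-to-memory route (the table and column names are invented): a few thousand rows of scalar properties is typically a few megabytes, well within a normal PHP memory limit, and it turns several hundred iterations times several passes into zero SQL round trips.

        <?php
        // One query up front...
        $result = mysql_query("SELECT id, score, weight FROM items");
        $objects = array();
        while ($row = mysql_fetch_object($result)) {
            $objects[$row->id] = $row;   // keyed by id for random access
        }

        // ...then the algorithm iterates purely in memory:
        for ($i = 0; $i < $maxIterations; $i++) {
            foreach ($objects as $obj) {
                // re-rank $obj using any other entries of $objects
            }
        }

    Querying inside the loop would cost hundreds of thousands of round trips; this way the database is touched once per run (plus one write-back at the end if the ranks are persisted).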

  • Need help with some basic Java

    - by Racket
    Hi, I'm doing the first-chapter exercises in my Java book and I have been stuck on a problem for a while now. Here's the question: prompt for and read a double value representing a monetary amount, then determine the fewest number of each bill and coin needed to represent that amount, starting with the highest (assume that a ten-dollar bill is the maximum size needed). For example, if the value entered is 47.63 (forty-seven dollars and sixty-three cents), the program should print the equivalent amount as:

        4 ten dollar bills
        1 five dollar bills
        2 one dollar bills
        2 quarters
        1 dimes
        0 nickels
        3 pennies

    I'm working through an example exactly as they said in order to get an idea, as you will see in the code. I managed to print the 4 ten-dollar bills, but I can't figure out how to get the "1 five dollar" bill - only the 7 leftover dollars (see code). Please don't write the whole code for me; I just need some advice regarding what I said. Thank you.

        import java.util.Scanner;

        public class PP29 {
            public static void main(String[] args) {
                Scanner sc = new Scanner(System.in);
                int amount;
                double value;
                double test1;
                double quarter;
                System.out.println("Enter \"double\" value: ");
                value = sc.nextDouble();
                amount = (int) value / 10;       // 47.63 / 10 => 4
                int amount2 = (int) value % 10;  // 47 - 40 = 7
                quarter = value * 100;           // 47.63 * 100 = 4763
                int sum = (int) quarter % 100;   // 4763 % 100 => 63
                System.out.println(amount);
                System.out.println(amount2);
            }
        }
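
    Without writing the program for you: the divide-and-remainder step you already used for tens is the whole trick - each remainder becomes the input for the next smaller denomination. Applied once more to your 7 leftover dollars:

        int afterTens = 7;           // your amount2: what % 10 left over
        int fives = afterTens / 5;   // 1 five-dollar bill
        int ones  = afterTens % 5;   // 2 one-dollar bills

    The same two lines, with 25, 10, 5 and 1, take care of quarters, dimes, nickels and pennies once you're down to the 63 cents.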

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indexes, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
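
    A hedged observation plus a sketch: the range covers 17.3M of the 20M rows, so a sequential scan is probably the right plan regardless - the first run's real cost was the 372 MB on-disk sort, which the second run already avoided via HashAggregate. What's still on the table: filtering on the raw column, so the per-row DATE() call leaves the WHERE clause and a plain timestamp index can serve narrower date ranges with a range scan.

        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp <  '2011-01-01'   -- sargable: compares the stored value directly
        GROUP BY day;

    Raising work_mem enough to keep any sort or hash in memory, or maintaining a small per-day summary table updated as rows arrive, are the usual next steps when counting most of a 20M-row table still isn't fast enough.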

  • Pointers to a typed variable in C#: Interface, Generic, object or Class? (Boxing/Unboxing)

    - by PaulG
    First of all, I apologize if this has been asked a thousand times. I read my C# book, I googled it, but I can't seem to find the answer I am looking for, or I am missing the point big time. I am very confused by the whole boxing/unboxing issue. Say I have fields in different classes, all returning typed variables (e.g. 'double'), and I would like to have one variable that can point to any of these fields. In plain old C I would do something like:

        double * newVar;
        newVar = &oldVar;
        newVar = &anotherVar;
        ...

    In C#, it seems I could define an interface, but that would require all the fields to be properties with the same name; it breaks apart when one of them doesn't have the same name or is not a property. I could also create a generic class returning double, but it seems a bit absurd to create a class to represent a 'double' when a 'double' class already exists. If I am not mistaken, it wouldn't even need to be generic - it could be a simple class returning double. I could create an object and box the typed variable into the newly created object, but then I would have to cast every time I use it. Of course, I always have the unsafe option... but I'm afraid of wandering into unknown memory space, dividing by zero and bringing an end to this world. None of these seem to be the same as the old simple 'double * variable'. Am I missing something here?
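
    The C# stand-in for that pointer is a delegate: a Func<double> can be re-pointed at anything that yields a double - a field, a property, a computation - regardless of its name or declaring class, with no boxing anywhere. A sketch (the objects and members are invented for illustration):

        // "Point" at one double-returning member:
        Func<double> read = () => sensor.Temperature;
        double v1 = read();                 // reads sensor.Temperature

        // Re-point it at a completely unrelated one:
        read = () => account.Balance;
        double v2 = read();                 // reads account.Balance

    If you also need to write through the "pointer", pair it with an Action<double> (e.g. v => sensor.Temperature = v). Because Func<double> is a value-returning call rather than an address, each read sees the member's current value, which is usually what the C idiom was after.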

  • I need to move a div'd CSS module into the center of the page...

    - by JackMcE
    I'm building a website for someone, and they want the text and bulk of the information centered on the page. The problem is, I can get everything contained in a tag and then assign the class, but I can't get the whole thing to center. It always hangs to the left, even if I apply centering to the div class - it's stuck on the left side of the page when I want everything to be centered. I could just format everything larger, but they want some space left in the background for the color and maybe some imagery later on (they haven't made up their minds). If you want to take a look, here is the link where I'm building and testing stuff. I know the header needs to be re-proportioned to fit with everything, but that's just a frame of reference: don't worry about the header, just know that I want the white text-information area with the purple border to stay the same size but move to the center. If someone could tell me how to do that, I would appreciate it greatly.
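
    The standard trick: a block element with an explicit width and auto left/right margins centers itself inside its parent (text-align: center only centers inline content, not the box itself). A sketch with an invented class name and width:

        .module {
            width: 800px;       /* any fixed or max width; auto margins need one */
            margin: 0 auto;     /* equal left/right margins => horizontally centered */
        }

    If the module is currently positioned with float: left or position: absolute, remove that first - either one overrides the auto margins.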

  • Getting an array of values of textboxes with the same class

    - by nCdy
    I set up a custom CSS class for an array of dynamic TextBoxes (inputs in HTML), and now I need to get an array of their values:

        <input type="text" style="width: 50px;" class="DynamicTB"
               id="ctl00_ContentPlaceHolder1_GridView1_ctl02_id"
               readonly="readonly" value="1"
               name="ctl00$ContentPlaceHolder1$GridView1$ctl02$id">

    The client doesn't really know the count of the inputs; that's why I use a class. Here is what I'm trying to do:

        $.each({ id: $("input.DynamicTB").css("value") }, function (id) {
            CallPageMethod("SelectBook", success, fail, "id", id);
        });

    I'm not sure that $("input.DynamicTB").css("value") works correctly, and how can I transfer the whole array of values to the SelectBook method? My JavaScript is bad and my debugger doesn't show me JavaScript errors, but it just doesn't work - something is wrong with the each. Finally, I just need to get the array of values of the dynamic textboxes and transfer them to the server side, but my [WebMethod] can't see them:

        [Web.Services.WebMethod]
        public static SelectBook(id : array) : string
        {
            id
        }

    And sure, I have no idea how I could use jQuery's .live and binding here.
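
    On the client side, the piece that's definitely off is .css("value") - that asks for a CSS property, not the input's value. jQuery's .map() collects the values into a real array, which can then be sent as one joined string (CallPageMethod and SelectBook are kept from the question; the "ids" parameter name is an assumption):

        // Gather every DynamicTB value into an array like ["1", "7", ...]:
        var ids = $('input.DynamicTB').map(function () {
            return this.value;
        }).get();

        // One call with all values, e.g. "1,7,12", split server-side:
        CallPageMethod('SelectBook', success, fail, 'ids', ids.join(','));

    Server-side, the page method then takes a single string parameter and splits it on ',' before use.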

  • Doing a global count of an object type (like Users), best practice?

    - by user246114
    Hi, I know keeping global counters is frowned upon in App Engine, but I am interested in getting some stats, like once every 24 hours. For example, I'd like to count the number of User objects in the system once every 24 hours. So how do we do this? Do we simply keep a set of admin-tool functions which do something like:

        SELECT FROM com.me.project.server.User

    and just see what the size of the returned List is? This is kind of a bummer, because the datastore would have to deserialize every single User instance to create the returned list, right? I could optimize this, possibly, by asking for only the keys to be returned, so the whole User object doesn't have to be deserialized. Then again, a global counter for the number of users probably wouldn't create too much contention, because there probably won't be hundreds of signups a minute for the service I'm creating. How should we go about doing this? Getting my total number of users once a day is probably a pretty typical operation? Thank you
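
    The keys-only idea is exactly right, and the low-level datastore API will even do the tallying for you. A hedged sketch (the kind name "User" is assumed to match the JDO class; countEntities walks the index rather than the entities themselves):

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.FetchOptions;
        import com.google.appengine.api.datastore.Query;

        public class UserStats {
            public static int countUsers() {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
                Query q = new Query("User").setKeysOnly();  // no User deserialization
                return ds.prepare(q)
                         .countEntities(FetchOptions.Builder.withLimit(Integer.MAX_VALUE));
            }
        }

    Wired to a cron.xml entry that fires once a day, this avoids both the sharded-counter machinery and pulling millions of full entities.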

  • Searching an array of words faster

    - by Martijn
    Hi everybody. I want to count how often each word of an array occurs in a database. It's pretty slow, and I want to know if there's a way of searching for multiple words, or a whole array, without a for loop. I've been struggling with this for a while now; here's my code:

        $dateBegin = "2010-12-07 15:54:24.0";
        $dateEnd = "2010-12-30 18:19:52.0";
        $textPerson = " text text text text text text text text text text text text text text ";
        $textPersonExplode = explode(" ", $textPerson);
        $db = dbConnect();
        for ($counter = 0; $counter <= sizeof($textPersonExplode) - 1; $counter++) {
            $query = "SELECT count(word) FROM `news_google_split`
                      WHERE `word` LIKE '$textPersonExplode[$counter]'
                        AND `date` >= '$dateBegin' AND `date` <= '$dateEnd'";
            $result = mysql_query($query) or die(mysql_error());
            while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
                $word[] = $textPersonExplode[$counter];
                $count[] = $row[0];
            }
            if (!$result) {
                die('Invalid query: ' . mysql_error());
            }
        }

    Thanks for the help.
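
    One round trip can replace the whole loop: put every word into a single IN (...) list and let GROUP BY return one count per word. A hedged sketch (same table and columns as above; the words are escaped because they're interpolated into SQL):

        $escaped = array_map('mysql_real_escape_string', $textPersonExplode);
        $inList  = "'" . implode("','", $escaped) . "'";

        $query = "SELECT word, COUNT(*) AS cnt
                  FROM `news_google_split`
                  WHERE word IN ($inList)
                    AND `date` BETWEEN '$dateBegin' AND '$dateEnd'
                  GROUP BY word";
        $result = mysql_query($query) or die(mysql_error());
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $word[]  = $row['word'];
            $count[] = $row['cnt'];
        }

    Words absent from the table simply don't appear in the result (their count is zero), and an index on (word, date) turns the lookup into a set of cheap range scans.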

  • Can git not store history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: is there a way to disable storing full history for specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. When I simply do git svn clone, it creates a very big repo - big enough to be bigger than my whole HDD. The problem lies in the binary directories, whose history is too large. The latest binaries are required for a proper local build, but their history is not required at all for my development process; I will never change them myself. I would like to store only the latest versions of specific folders, or maybe a history of no more than a week. I could only find a filter for git svn fetch which excludes specific folders entirely; this is not exactly what I need. It's OK with me to have a cron task which deletes history from specific folders, but I don't know how to write one - and cron doesn't solve the problem of the first git svn clone anyway. P.S. The SVN repository structure cannot be changed by any means.
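
    A hedged sketch of the usual compromise (the paths and URL are invented): since git-svn has no per-path shallow history, keep the heavy directories out of git entirely with --ignore-paths, and pull just their latest revision with plain svn, refreshed by cron as often as you like.

        # Clone everything except the binary directories (regex on repo paths):
        git svn clone --ignore-paths="^(binaries|libs/prebuilt)/" \
            http://svn.example.com/repo work

        # Fetch only the current binaries, no history, where the build expects them:
        svn checkout http://svn.example.com/repo/binaries work/binaries

        # Cron then just runs "svn update work/binaries" to stay current.

    Adding work/binaries to .git/info/exclude keeps git status fast and clean despite the extra directory.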
