Search Results

Search found 5262 results on 211 pages for 'operation'.

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, and 200K new records every day. The total size is around 15 GB at the moment. My clients query the table mostly via a PHP script, and the result set is around 10K records (not very large): select * from T where timestamp > X and timestamp < Y and additionFilters. I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • Multiple Progress Bar States in WPF

    - by kbo206
    I'm currently developing a WPF application in C# and I want a progress bar control that can be in a "paused" state as well as an "error" state, much like this: http://wyday.com/windows-7-progress-bar/ Unfortunately, that's a Windows Forms control, and hosting it via a WindowsFormsHost proved to be incompatible. My question is, how can I go about accomplishing a similar effect in WPF? Is it possible to make multiple "states" of a progress bar? Is this kind of functionality already in WPF and I'm just overlooking it? I'm mainly talking about the progress bar itself here; I'm pretty sure I know how to achieve this in the taskbar. All help is appreciated, especially code examples. Thanks!
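    A sketch of one approach, not from the original thread: the Windows 7 look is essentially a green/yellow/red fill, so paused/error states can be emulated by changing the ProgressBar's Foreground brush. Depending on the OS theme, the default template may keep its own highlight overlay, in which case you would need to retemplate the bar; the type and member names below are real WPF APIs, but the enum and the chosen colors are illustrative assumptions.

        using System.Windows.Controls;
        using System.Windows.Media;

        public enum ProgressState { Normal, Paused, Error }

        public static class ProgressBarStates
        {
            // Recolor the bar to mimic the Windows 7 normal/paused/error states.
            public static void SetState(ProgressBar bar, ProgressState state)
            {
                switch (state)
                {
                    case ProgressState.Paused:
                        bar.Foreground = Brushes.Gold;      // paused: yellow
                        break;
                    case ProgressState.Error:
                        bar.Foreground = Brushes.Red;       // error: red
                        break;
                    default:
                        bar.Foreground = Brushes.LimeGreen; // normal: green
                        break;
                }
            }
        }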

    Read the article

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs - only senders whose IP is on that list will be considered. Since that list must be checked on every packet and UDP traffic is so volatile, the lookup must be as fast as possible. A Dictionary would be a good choice, but it is a key-value structure, and what I actually need here is a dictionary-like (hash lookup) key-only structure. Is there something like that? It's a small annoyance rather than a bug - I can still use a Dictionary. Thanks, M.
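    A key-only hash structure does exist in .NET 3.5 and later: HashSet<T>. A minimal sketch of the lookup (the class and field names, and the choice of IPAddress as the key type, are illustrative assumptions):

        using System.Collections.Generic;
        using System.Net;

        public class SenderFilter
        {
            // O(1) membership test without the dummy values a Dictionary<TKey, TValue> would need.
            private readonly HashSet<IPAddress> allowedSenders = new HashSet<IPAddress>();

            public void Allow(IPAddress sender)
            {
                allowedSenders.Add(sender);
            }

            public bool IsAllowed(IPAddress sender)
            {
                return allowedSenders.Contains(sender);
            }
        }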

    Read the article

  • Does deleting a branch in git remove it from the history?

    - by Ken Liu
    Coming from svn, just starting to become familiar with git. When a branch is deleted in git, is it removed from the history? In svn, you can easily recover a branch by reverting the delete operation (reverse merge). Like all deletes in svn, the branch is never really deleted, it's just removed from the current tree. If the branch is actually deleted from the history in git, what happens to the changes that were merged from that branch? Are they retained?

    Read the article

  • How to use a CTabCtrl in an MFC dialog-based application?

    - by shan23
    I need to do something I expected to be simple - create a tab control with 2 tabs, implying 2 modes of operation for my app. When the user clicks Tab1, he'll be presented with some buttons and textboxes, and when he clicks Tab2, some other input method. I noticed that CTabCtrl is the class used in MFC to add tabs. However, once I added the tab control using the UI designer, I couldn't specify how many tabs there'll be using the property window. Searching the net, I found some examples, but all of them required you to derive from CTabCtrl, create 2 or more child dialogs, etc., and write your own custom class. My question is, since I want to do something so basic, why couldn't I do it using the familiar Add Event Handler/Add Member Variable wizard and then handle everything else inside my app's class? Surely the default CTabCtrl class can do something useful without needing to derive from it?

    Read the article

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this question, and although there are many similar questions about read/write in C/C++, I haven't found one about this specific task. I want to read, from multiple files (256x256 files), only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file: open the file (read, binary mode): fstream fTest("current_file", ios_base::in | ios_base::binary); seek the position I want to read: fTest.seekg(position*sizeof(test_value), ios_base::beg); read the bytes: fTest.read((char *) &(output[i][j]), sizeof(test_value)); and close the file: fTest.close(); This takes about 350 ms to run inside a nested for loop with 256x256 iterations (one per file). Q: Do you think there is a better way to implement this operation? How would you do it?

    Read the article

  • Implement dictionary using java

    - by ahmad
    Task: Dictionary ADT. The dictionary ADT models a searchable collection of key-element entries. Multiple items with the same key are allowed. Applications: word-definition pairs. Dictionary ADT methods: find(k): if the dictionary has an entry with key k, returns it, else returns null; findAll(k): returns an iterator of all entries with key k; insert(k, o): inserts and returns the entry (k, o); remove(e): remove the entry e from the dictionary; size(), isEmpty().

    Operation         Output        Dictionary
    insert(5,A)       (5,A)         (5,A)
    insert(7,B)       (7,B)         (5,A),(7,B)
    insert(2,C)       (2,C)         (5,A),(7,B),(2,C)
    insert(8,D)       (8,D)         (5,A),(7,B),(2,C),(8,D)
    insert(2,E)       (2,E)         (5,A),(7,B),(2,C),(8,D),(2,E)
    find(7)           (7,B)         (5,A),(7,B),(2,C),(8,D),(2,E)
    find(4)           null          (5,A),(7,B),(2,C),(8,D),(2,E)
    find(2)           (2,C)         (5,A),(7,B),(2,C),(8,D),(2,E)
    findAll(2)        (2,C),(2,E)   (5,A),(7,B),(2,C),(8,D),(2,E)
    size()            5             (5,A),(7,B),(2,C),(8,D),(2,E)
    remove(find(5))   (5,A)         (7,B),(2,C),(8,D),(2,E)
    find(5)           null          (7,B),(2,C),(8,D),(2,E)

    Detailed explanation: NO. Specific requirements: please get it done, I need HELP!!!

    Read the article

  • Is there a built-in way to determine the size of a WCF response?

    - by jaminto
    Before a client gets the full payload of the web request, we'd like to first send it a measurement of the size of the response it will get. If the response will be too large, the client will present a message to the user giving them the option to abort the operation. We can write some custom code to preload the response on the server, determine the size, and then pass it on to the client, but we'd rather not if there's another way to do it. Does anyone know if WCF has any tricky way to do this? Or are there any free third party tools out there that will accomplish this? Thanks.

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know - with what I wrote in the question I shouldn't be surprised. My situation: I'm slowly working on an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I saw it's a fairly quick "rewrite these 8 queries as one query with JOINs" situation. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases, and surprise, surprise, they use different collations. The fact of the matter is we use only "normal" Latin characters that English speakers would consider the entire alphabet, and this whole thing will be out of use in a few months, so a band-aid is all I need. Long story short, I need some way to cast to a single collation so I can compare two fields from two databases. The exact error is: Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.

    Read the article

  • Linux and I/O completion ports?

    - by someguy
    Using Winsock, you can configure sockets or separate I/O operations to "overlap". This means that calls to perform I/O return immediately, while the actual operations are completed asynchronously by separate worker threads. Winsock also provides "completion ports". From what I understand, a completion port acts as a multiplexer of handles (sockets). A handle can be demultiplexed if it isn't in the middle of an I/O operation, i.e. if all its I/O operations are completed. So, on to my question... does Linux support completion ports, or even asynchronous I/O for sockets?

    Read the article

  • Binding a Java Integer to JavaScriptEngine doesn't work

    - by rplantiko
    To see how binding Java objects to symbols in a dynamic language works, I wrote the following spike test, binding a java.lang.Integer to the symbol i to be changed in JavaScript: @Test public void bindToLocalVariable() throws ScriptException { javax.script.ScriptEngineManager sem = new javax.script.ScriptEngineManager(); javax.script.ScriptEngine engine = sem.getEngineByName("JavaScript"); Integer i = new Integer(17); engine.put( "i", i ); engine.eval( "i++;" ); // Now execute JavaScript assertEquals( 18, i.intValue() ); } Unfortunately, I get a failure. java.lang.AssertionError: expected:<18> but was:<17> JavaScript knows the symbol i (otherwise it would have thrown a ScriptException which is not the case), but the increment operation i++ is not performed on the original Integer object. Any explanations?

    Read the article

  • Inconsistent MySQL COLLATE errors across databases

    - by Teflon Ted
    I have two physically-separate MySQL databases on which I have to run a single query. The query has a section of SQL that looks like this: and foo_table.bar_column like concat('%', rules.pattern, '%') COLLATE utf8_general_ci It runs fine on database A but on database B I get this error: ERROR 1253 (42000): COLLATION 'utf8_general_ci' is not valid for CHARACTER SET 'latin1' If I remove the collation it runs fine on database B but on database A I get this error: ERROR 1267 (HY000): Illegal mix of collations (utf8_general_ci,IMPLICIT) and (utf8_unicode_ci,IMPLICIT) for operation 'like' Is there a version of the query that will run on both databases? Or, is there a configuration I can change on either database to make the query happy in both places? Update: Database A is version 5.1.38, Database B is version 5.1.34

    Read the article

  • LINQ to SQL: making a "double IN" query crashes

    - by Alex
    I need to do the following: var a = from c in DB.Customers where (from t1 in DB.Table1 where t1.Date >= DateTime.Now select t1.ID).Contains(c.ID) && (from t2 in DB.Table2 where t2.Date >= DateTime.Now select t2.ID).Contains(c.ID) select c; It doesn't want to run; I get the following error: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. But when I try to run var a = from c in DB.Customers where (from t1 in DB.Table1 where t1.Date >= DateTime.Now select t1.ID).Contains(c.ID) select c; or var a = from c in DB.Customers where (from t2 in DB.Table2 where t2.Date >= DateTime.Now select t2.ID).Contains(c.ID) select c; each one works! I'm sure that both IN subqueries contain some customer IDs.
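    A hedged note, not from the original post: the combined query translates to two nested IN subqueries, so the server simply has much more work to do. Assuming this is LINQ to SQL (DB being a System.Data.Linq.DataContext), one quick way to check whether it is purely a speed problem rather than a wrong translation is to raise the command timeout and capture the generated SQL; the context class name below is a placeholder:

        using System;
        using System.Data.Linq;
        using System.Linq;

        static void RunQuery()
        {
            using (var db = new MyDataContext())            // placeholder for the real DataContext type
            {
                db.CommandTimeout = 120;                    // seconds; the default is 30
                db.Log = Console.Out;                       // dump the generated SQL for inspection
                var cutoff = DateTime.Now;

                var customers = (from c in db.Customers
                                 where (from t1 in db.Table1
                                        where t1.Date >= cutoff
                                        select t1.ID).Contains(c.ID)
                                    && (from t2 in db.Table2
                                        where t2.Date >= cutoff
                                        select t2.ID).Contains(c.ID)
                                 select c).ToList();
            }
        }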

    Read the article

  • C# socket blocking behavior

    - by Gearoid Murphy
    My situation is this: I have a C# TCP socket through which I receive structured messages consisting of a 3-byte header and a variable-size payload. The TCP data is routed through a network of tunnels and is occasionally susceptible to fragmentation. The solution to this is to perform a blocking read of 3 bytes for the header and a blocking read of N bytes for the variable-size payload (the value of N is in the header). The problem I'm experiencing is that occasionally the blocking receive operation returns a partial packet, that is, it reads fewer bytes than the number I explicitly set in the receive call. After some debugging, it appears that the number of bytes it returns is equal to the number of bytes in the Available property of the socket before the receive op. This behavior is contrary to my expectation. If the socket is blocking and I explicitly set the number of bytes to receive, shouldn't the socket block until it receives those bytes? Any help, pointers, etc. would be much appreciated.
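    A note beyond the excerpt above: a blocking Socket.Receive only guarantees that it returns at least one byte (or zero when the peer closes); the size argument is an upper bound, not a demand, which is why it returns whatever happens to be Available. The usual fix is to loop until the expected count has arrived. A minimal sketch (the helper name is an illustrative assumption):

        using System.Net.Sockets;

        static byte[] ReceiveExactly(Socket socket, int count)
        {
            var buffer = new byte[count];
            int received = 0;
            while (received < count)
            {
                // Receive blocks until some data arrives, but may return fewer bytes
                // than requested, so keep reading until the buffer is full.
                int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
                if (n == 0)
                    throw new SocketException((int)SocketError.ConnectionReset); // peer closed the connection
                received += n;
            }
            return buffer;
        }

    Read the 3-byte header with ReceiveExactly(socket, 3), parse N out of it, then call ReceiveExactly(socket, N) for the payload.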

    Read the article

  • Ramifications of CheckForIllegalCrossThreadCalls=false

    - by Ron Skufca
    I recently updated an application from VS2003 to VS2008 and I knew I would be dealing with a host of "Cross-thread operation not valid: Control 'myControl' accessed from a thread other than the thread it was created on" errors. I am handling this in what I believe is the correct way (see the code sample below). I am running into numerous controls that are going to need a similar fix, and I don't want to have similar code for every label, textbox, etc. that is accessed by a non-UI thread. What are the ramifications of just setting CheckForIllegalCrossThreadCalls = false for the entire app? I found a CodeProject article with various workarounds and a warning at the bottom NOT to set the property. I am looking for other opinions/experiences on this issue. private void ShowStatus(string szStatus) { try { if (this.statusBar1.InvokeRequired) { BeginInvoke(new MethodInvoker(delegate() { ShowStatus(szStatus); })); } else { statusBar1.Panels[0].Text = szStatus; } } catch (Exception ex) { LogStatus.WriteErrorLog(ex, "Error", "frmMNI.ShowStatus()"); } }
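    A sketch of the usual middle ground, not from the original post: rather than turning the check off globally, the repeated invoke boilerplate can be collapsed into one reusable extension method, so each call site stays a single line. This assumes C# 3.0 / .NET 3.5 for extension methods:

        using System;
        using System.Windows.Forms;

        public static class ControlExtensions
        {
            // Marshal the action onto the control's UI thread only when necessary.
            public static void InvokeIfRequired(this Control control, Action action)
            {
                if (control.InvokeRequired)
                    control.BeginInvoke(action);
                else
                    action();
            }
        }

        // Usage from any thread:
        //   statusBar1.InvokeIfRequired(() => statusBar1.Panels[0].Text = szStatus);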

    Read the article

  • Using Apache Velocity with StringBuilders/CharSequences

    - by mindas
    We are using Apache Velocity for dynamic templates. At the moment Velocity has the following methods for evaluation/replacing: public static boolean evaluate(Context context, Writer writer, String logTag, Reader reader) and public static boolean evaluate(Context context, Writer out, String logTag, String instring). We use these methods by providing a StringWriter to write the evaluation results into. Our incoming data comes as a StringBuilder, so we use StringBuilder.toString() and feed it as instring. The problem is that our templates are fairly large (they can be megabytes, tens of MB in rare cases), replacements occur very frequently, and each replacement operation triples the amount of required memory (incoming data + StringBuilder.toString(), which creates a new copy + outgoing data). I was wondering if there is a way to improve this. E.g. if I could find a way to provide a Reader and a Writer on top of the same StringBuilder instance that only uses extra memory for the in/out differences, would that be a good approach? Has anybody done anything similar and could share any source for such a class? Or are there any better solutions to the given problem?

    Read the article

  • How to create an object reference to a xaml page from App.xaml.cs codebehind?

    - by John K.
    Hi all, I have a Silverlight 4 Business Project where I have enabled the ASP.NET Authentication/Authorization role information. I would like to pass the currently authenticated user's account information from the app.xaml.cs codebehind to a different XAML page, but I have no idea how that is done, or if it's even possible. My goal is to databind the IsEnabled property of various buttons of my target XAML page, based on whether the current user is in a particular admin related role or not. The Application_UserLoaded event handler of app.xaml.cs seems to be the safest event handler to initiate this task because it fires only after the user's account information is loaded from the server. I had previously attempted to retrieve the current user information directly from my target XAML page, but I was never getting the current user information because Application_UserLoaded hadn't finished loading the current user info yet. public partial class App : Application { private void Application_UserLoaded(LoadUserOperation operation) { // How do you create an object reference to a XAML page from your project solution // from this event handler? } } Thanks in advance for any assistance, John
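    A sketch of one common workaround, not from the original post: instead of App reaching into a page instance, let App cache the loaded user and raise an event, and have the page pull the role information when it loads. This assumes the LoadUserOperation exposes the loaded principal through a User property (check your RIA Services version); the page, button, and role names are illustrative:

        using System;
        using System.Windows;
        using System.Windows.Controls;

        public partial class App : Application
        {
            public static System.Security.Principal.IPrincipal CurrentUser { get; private set; }
            public static event EventHandler UserLoaded;

            private void Application_UserLoaded(LoadUserOperation operation)
            {
                CurrentUser = operation.User;          // cache the authenticated principal
                var handler = UserLoaded;
                if (handler != null)
                    handler(this, EventArgs.Empty);    // let interested pages know it is ready
            }
        }

        // In the target page's code-behind:
        public partial class AdminPage : Page
        {
            public AdminPage()
            {
                InitializeComponent();
                App.UserLoaded += (s, e) => UpdateButtons();
                UpdateButtons();                       // in case the user was already loaded
            }

            private void UpdateButtons()
            {
                DeleteButton.IsEnabled = App.CurrentUser != null
                                         && App.CurrentUser.IsInRole("Administrators");
            }
        }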

    Read the article

  • Is there an easy way to sign a C++ CLI assembly in VS 2010?

    - by jyoung
    Right now I am setting the Linker/Advanced/KeyFile option. I am getting the "mt.exe : general warning 810100b3: is a strong-name signed assembly and embedding a manifest invalidates the signature. You will need to re-sign this file to make it a valid assembly.". Reading from the web, it sounds like I have to set the delay signing option, download the SDK, and run sn.exe as a post build event. Surely there must be an easier way to do this common operation in VS2010?

    Read the article

  • SQL Server Command Timeouts: Timeout expired. The timeout period elapsed prior to completion of the operation

    - by user104295
    Hi there, we've had an ASP.NET web application deployed for a number of years now... last week we migrated to a slightly slower server to save some money. Now we're frequently getting command timeouts: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. This is understandable in that a slower server will take longer to produce results; it takes more time and produces a timeout. Is there any system-wide way to get SqlClient to use a longer timeout? We cannot change the code, as it's everywhere... we're using multiple data access technologies as well. Maybe there's a default command timeout setting in the connection string? We just need to increase it by 30 seconds; we're happy to wait a bit longer for queries to return. Thanks
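    A note beyond the excerpt: for System.Data.SqlClient there is no connection-string keyword for the command timeout - "Connect Timeout" only governs how long opening the connection may take - so the 30-second default has to be overridden per command (or via whatever setting each data access layer exposes for it). A minimal sketch of the per-command knob, with a placeholder helper name:

        using System.Data.SqlClient;

        static SqlCommand CreateCommand(SqlConnection connection, string sql)
        {
            var command = new SqlCommand(sql, connection);
            command.CommandTimeout = 60;   // seconds; SqlCommand's default is 30
            return command;
        }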

    Read the article

  • SQL vs MySQL: Rules about aggregate operations and GROUP BY

    - by Phazyck
    In this book I'm currently reading while following a course on databases, the following example of an illegal query using an aggregate operator is given: Find the name and age of the oldest sailor. Consider the following attempt to answer this query: SELECT S.name, MAX(S.age) FROM Sailors S. The intent is for this query to return not only the maximum age but also the name of the sailors having that age. However, this query is illegal in SQL--if the SELECT clause uses an aggregate operation, then it must use only aggregate operations unless the query contains a GROUP BY clause! Some time later, while doing an exercise using MySQL, I faced a similar problem and made a mistake similar to the one mentioned. However, MySQL didn't complain and just spat out some tables which later turned out not to be what I needed. Is the query above really illegal in SQL but legal in MySQL, and if so, why is that? In what situation would one need to make such a query?

    Read the article

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus what are the limitations and gotchas. For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means that a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously this could be done if you work hard enough on any processor (picking out particular bytes etc), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can ensure that other cores can guarantee a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file loads into memory quickly. In other words, if I run: import cPickle as pickle f = open("bigNetworkXGraph.pickle","rb") binary_data = f.read() # This part doesn't take long graph = pickle.loads(binary_data) # This takes ages How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.

    Read the article

  • SQL Server 2008 log size management problems

    - by b0x0rz
    I'm trying to shrink the log of a database AND set the recovery model to simple, but there is always an error, whatever I try: USE 4_o5; GO ALTER DATABASE 4_o5 SET RECOVERY SIMPLE; GO DBCC SHRINKFILE (4_o5_log, 10); GO The output of sp_helpfile says that the log file is located at (hosted solution): I:\dataroot\4_o5_log.LDF Please help me perform this operation; the log file got large when importing a lot of data, and that info is no longer needed (I have multiple backups since then). The exact error message when running the query above is: incorrect syntax near '4'. RECOVERY is not a recognized SET option. incorrect syntax near _5_log'. I am using Visual Studio 2010 (I also have SQL Server Express installed locally; SQL Server 2008 proper is installed at the provider (shared)). Thanks a lot

    Read the article

  • c# multi threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being added to the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like writer thread(s) to process them. Once a reader has started reading a file, I'd like to mark it as being processed, such as by renaming it; once it's read, rename it to completed. How would you approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use to avoid locks? Do you have a better approach to this scheme that you'd like to share?
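    A sketch of one way to structure this, not from the original post: a bounded queue fits better than a distributed hash table - one producer claims each file by renaming it and enqueues the path, and a small pool of workers consumes from the queue, so no explicit locks are needed. This assumes .NET 4's BlockingCollection; ProcessFile, the file extensions, and the worker count are illustrative placeholders:

        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class FolderProcessor
        {
            public void Run(string folder)
            {
                var queue = new BlockingCollection<string>(boundedCapacity: 100);

                // Producer: claim each file by renaming it, then hand it to the workers.
                Task.Factory.StartNew(() =>
                {
                    foreach (var file in Directory.EnumerateFiles(folder, "*.txt"))
                    {
                        var claimed = file + ".processing";
                        File.Move(file, claimed);           // mark the file as being processed
                        queue.Add(claimed);
                    }
                    queue.CompleteAdding();
                });

                // Consumers: parse files as they become available.
                var workers = new Task[4];
                for (int i = 0; i < workers.Length; i++)
                {
                    workers[i] = Task.Factory.StartNew(() =>
                    {
                        foreach (var claimed in queue.GetConsumingEnumerable())
                        {
                            ProcessFile(claimed);           // placeholder for the actual parsing
                            File.Move(claimed, claimed.Replace(".processing", ".completed"));
                        }
                    });
                }
                Task.WaitAll(workers);
            }

            private void ProcessFile(string path) { /* parse and process the file here */ }
        }

    To keep picking up newly arriving files, the producer loop can be driven by a FileSystemWatcher or a periodic re-scan feeding the same queue instead of a single enumeration.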

    Read the article

  • Exposing some live data on a website - New to ASP.NET, need guidelines

    - by Carlos
    I have a large .NET based system running within the company intranet, which allows winforms users to see some live calculated numbers and a custom winforms control drawn live (but not real-time). The Forms users can also affect the operation of the system. I would like to just show the live numbers on a website, along with the custom control. Nothing needs to come back from the web user, as the web app is meant to be just for monitoring. All the numbers can be calculated at the server. It's been a long time since I touched ASP.NET, and I need to know how to proceed. What are the steps in building and deploying such a website? Any caveats I need to look out for?

    Read the article
