Search Results

Search found 501 results on 21 pages for 'sequential'.

Page 15 of 21

  • Generate and merge data with python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same, and I want to remove them. The sequential version is essentially:

        def create_new_data_for(datum):
            """make a list of new data from some old datum"""
            return [datum.modified_copy(k) for k in datum.k_list]

        data = [some list of data]  # some data to start with

        # generate a list of new data from the old data, we'll reduce it next
        newdata = []
        for d in data:
            newdata.extend(create_new_data_for(d))

        # now reduce the data under ".matches(other)"
        reduced = []
        for d in newdata:
            for seen in reduced:
                if d.matches(seen):
                    break
            else:
                # so we haven't seen anything like d yet
                reduced.append(d)

        # now reduced is finished and is what we want!

    I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation: each process would just put the stuff it creates on the queue, and when the processes are reducing the data, they could just get the data from the queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different, better way to accomplish this goal?
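
    A minimal sketch of one way to split the work (assuming the data objects are picklable): parallelize only the generation step with a multiprocessing.Pool, then run the single-threaded reduction in the parent process, so "reduced" is never shared between processes at all.

        from multiprocessing import Pool

        def generate_and_reduce(data, processes=4):
            pool = Pool(processes)
            try:
                # each worker builds the list of new data for one datum
                chunks = pool.map(create_new_data_for, data)
            finally:
                pool.close()
                pool.join()
            # the reduction stays sequential, so no locking is needed
            reduced = []
            for chunk in chunks:
                for d in chunk:
                    if not any(d.matches(seen) for seen in reduced):
                        reduced.append(d)
            return reduced

    The reduction is often the cheaper half; if it is not, it could be split by partitioning on some cheap key of the data and reducing each partition in its own worker before a final merge.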

    Read the article

  • How can I see which shared folders my program has access to?

    - by Kasper Hansen
    My program needs to read and write to folders on other machines that might be in another domain, so I used System.Runtime.InteropServices to add shared folders. This worked fine when it was hard-coded in the main method of my Windows service, but since then something went wrong and I don't know if it is a coding error or a configuration error.

    What is the scope of a shared folder? If a thread in my program adds a shared folder, can the entire local machine see it? Is there a way to view which shared folders have been added? Or is there a way to see when a folder is added?

        [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        internal static extern System.UInt32 NetUseAdd(string UncServerName, int Level, ref USE_INFO_2 Buf, out uint ParmError);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        internal struct USE_INFO_2
        {
            internal LPWSTR ui2_local;
            internal LPWSTR ui2_remote;
            internal LPWSTR ui2_password;
            internal DWORD ui2_status;
            internal DWORD ui2_asg_type;
            internal DWORD ui2_refcount;
            internal DWORD ui2_usecount;
            internal LPWSTR ui2_username;
            internal LPWSTR ui2_domainname;
        }

        private void AddSharedFolder(string name, string domain, string username, string password)
        {
            if (name == null || domain == null || username == null || password == null)
                return;

            USE_INFO_2 useInfo = new USE_INFO_2();
            useInfo.ui2_remote = name;
            useInfo.ui2_password = password;
            useInfo.ui2_asg_type = 0; // disk drive
            useInfo.ui2_usecount = 1;
            useInfo.ui2_username = username;
            useInfo.ui2_domainname = domain;

            uint paramErrorIndex;
            uint returnCode = NetUseAdd(String.Empty, 2, ref useInfo, out paramErrorIndex);
            if (returnCode != 0)
            {
                throw new Win32Exception((int)returnCode);
            }
        }

    Read the article

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database with a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). It still uses a sequential scan and ignores the index! Where is the problem?
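
    A note on what the plan itself shows, offered as a sketch of a fix rather than a definitive answer: the index was created on the expression lower(column2), but the query filters on the bare column (lower() is applied only to the constant), so the planner cannot use the expression index. Rewriting the predicate to use the same expression lets it match:

        -- matches the expression index on lower(column2)
        SELECT * FROM my_table WHERE lower(column2) = lower('value12');

    Alternatively, if the stored values should be compared as-is rather than case-insensitively, a plain index on column2 would fit the original query.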

    Read the article

  • need help fixing unique key in rails. rails is adding id causing duplicate key

    - by railsnew
    I need some help fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by the available memory on a single node or cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will of course consider using it), I am trying to figure out whether there might be other mature projects that implement what I need.

    Specifically, the only aspect of the value I need is access to it as a blob. For the key, however, I need range queries (as in, access to values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (client-side data binding, flexible value/content types, etc. are fine). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case.

    To help filter out answers, I have investigated some alternatives within the general domain: Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented); and I am aware of systems that are not quite distributed while otherwise qualifying (BDB variants, Tokyo Cabinet itself (not sure whether Tyrant might qualify), Redis (in-memory store only)).

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication; the idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: with peer-to-peer replication in mind, I settled on using a uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

    1. Abandon Entity Framework and use another ORM: either NHibernate (giving up LINQ support) or LINQ to SQL (giving up future support, not to mention being bound to SQL Server).
    2. Abandon GUIDs and go with another PK strategy.
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with LINQ to SQL (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
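
    For what it's worth, option 3 is commonly implemented with "COMB" GUIDs, where the trailing bytes of a random GUID are replaced with a timestamp so that application-generated values sort roughly sequentially. A rough sketch of the classic technique (the byte positions follow SQL Server's uniqueidentifier comparison order; treat this as an illustration rather than a vetted implementation):

        using System;

        public static class CombGuid
        {
            // Returns a GUID whose last six bytes encode the current time, so values
            // generated close together sort near each other in SQL Server's ordering.
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.UtcNow;
                TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
                TimeSpan msecs = now.TimeOfDay;

                // SQL Server's datetime has roughly 3.33 ms resolution, hence the divisor.
                byte[] daysBytes = BitConverter.GetBytes(days.Days);
                byte[] msecsBytes = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

                // SQL Server compares the last byte group of a uniqueidentifier first,
                // so the timestamp goes at the end of the array.
                Array.Reverse(daysBytes);
                Array.Reverse(msecsBytes);
                Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, 10, 2);
                Array.Copy(msecsBytes, msecsBytes.Length - 4, guidBytes, 12, 4);

                return new Guid(guidBytes);
            }
        }

    The per-node uniqueness still comes from the random GUID bytes, so this keeps the collision-avoidance property while taming index fragmentation; it is essentially the approach NHibernate's guid.comb generator takes.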

    Read the article

  • How can I get the rank of rows relative to total number of rows based on a field?

    - by Arms
    I have a scores table that has two fields: user_id and score. I'm fetching specific rows that match a list of user_id's. How can I determine a rank for each row relative to the total number of rows, based on score? The rows in the result set are not necessarily sequential (the scores will vary widely from one row to the next). I'm not sure if this matters, but user_id is a unique field.

    Edit (@Greelmo): I'm already ordering the rows. If I fetch 15 rows, I don't want the rank to be 1-15. I need it to be the position of that row compared against the entire table by the score property. So if I have 200 rows, one row's rank may be 3 and another may be 179 (these are arbitrary numbers for example only).

    Edit 2: I'm having some luck with this query, but I actually want to avoid ties:

        SELECT s.score,
               s.created_at,
               u.name,
               u.location,
               u.icon_id,
               u.photo,
               (SELECT COUNT(*) + 1 FROM scores WHERE score > s.score) AS rank
        FROM scores s
        LEFT JOIN users u ON u.uID = s.user_id
        ORDER BY s.score DESC, s.created_at DESC
        LIMIT 15

    If two or more rows have the same score, I want the latest one (or earliest - I don't care) to be ranked higher. I tried modifying the subquery with AND id > s.id, but that ended up giving me an unexpected result set and different ties.
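
    A sketch of one way to break the ties in that correlated subquery, assuming scores has an auto-increment id column (so a larger id means a later row): a row ranks above another when its score is higher, or when the scores tie and its id is higher. Note the tie-break condition is OR'd with the score comparison rather than AND'd onto it:

        SELECT s.score,
               s.created_at,
               (SELECT COUNT(*) + 1
                  FROM scores s2
                 WHERE s2.score > s.score
                    OR (s2.score = s.score AND s2.id > s.id)) AS rank
        FROM scores s
        ORDER BY s.score DESC, s.id DESC
        LIMIT 15;

    Using the same tie-break in the ORDER BY (s.id rather than s.created_at) keeps the displayed order consistent with the computed ranks.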

    Read the article

  • Get an array of structures from native dll to c# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now.

        // in C# .NET 2.0 CF project
        [StructLayout(LayoutKind.Sequential)]
        public struct TableEntry
        {
            [MarshalAs(UnmanagedType.LPWStr)]
            public string description;
            public int item;
            public int another_item;
            public IntPtr some_data;
        }

        [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)]
        public static extern bool GetTable(ref TableEntry[] table);

        SomeFunction()
        {
            TableEntry[] table = null;
            bool success = GetTable(ref table);
            // at this point, the table is empty
        }

        // In Native C++ DLL
        std::vector< TABLE_ENTRY > global_dll_table;

        extern "C" __declspec(dllexport) bool GetTable(TABLE_ENTRY* table)
        {
            table = &global_dll_table.front();
            return true;
        }

    Thanks, PaulH

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like "The man went to the store", the annotations might look like:

        Word    POS   Chunk   NER
        ====    ===   =====   ========
        The     DT    NP      Person
        man     NN    NP      Person
        went    VBD   VP      -
        to      TO    PP      -
        the     DT    NP      Location
        store   NN    NP      Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person followed by the words "arrived at" followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to approach this with Lucene? Is there any way to index and search over document fields that contain structured tokens?

    Read the article

  • Interchange structured data between Haskell and C

    - by Eonil
    First, I'm a Haskell beginner. I'm planning to integrate Haskell with C for a realtime game: Haskell does the logic, C does the rendering. To do this, I have to pass a large, complexly structured piece of data (the game state) back and forth on every tick (at least 30 times per second), so passing the data should be lightweight. The state data may be laid out in a contiguous region of memory. Both the Haskell and C parts should be able to access every part of the state freely. In the best case, the cost of passing the data is copying one pointer; in the worst case, copying the whole data with conversion.

    I'm reading about Haskell's FFI (http://www.haskell.org/haskellwiki/FFICookBook#Working_with_structs). The Haskell code there looks like it specifies the memory layout explicitly. I have a few questions:

    1. Can Haskell specify memory layout explicitly (so that it matches a C struct exactly)?
    2. Is this the real memory layout, or is some kind of conversion required (with a performance penalty)?
    3. If Q#2 is true, is there any performance penalty when the memory layout is specified explicitly?
    4. What's the syntax #{alignment foo}? Where can I find documentation about this?
    5. If I want to pass huge data with the best performance, how should I do that?

    PS: the explicit memory layout feature I have in mind is just C#'s [StructLayout] attribute, which specifies in-memory position and size explicitly (http://www.developerfusion.com/article/84519/mastering-structs-in-c/). I'm not sure Haskell has a linguistic construct matching the fields of a C struct.
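
    On question 4: the #{...} directives are not Haskell syntax but come from the hsc2hs preprocessor, which computes sizes, alignments, and field offsets from the C headers at build time and splices them into the generated .hs file; that is how a Storable instance is kept in sync with the C struct's actual layout. A minimal sketch of such an instance in a .hsc file (game_state.h and the game_state_t struct with tick and energy fields are hypothetical names used only for illustration):

        -- GameState.hsc, processed by hsc2hs
        module GameState where

        #include "game_state.h"

        import Foreign
        import Foreign.C.Types

        data GameState = GameState { gsTick :: CInt, gsEnergy :: CDouble }

        instance Storable GameState where
            sizeOf    _ = #{size game_state_t}
            alignment _ = #{alignment game_state_t}
            peek p = do
                t <- #{peek game_state_t, tick} p
                e <- #{peek game_state_t, energy} p
                return (GameState t e)
            poke p (GameState t e) = do
                #{poke game_state_t, tick} p t
                #{poke game_state_t, energy} p e

    With something like this, C can hand Haskell a Ptr to the struct and Haskell can peek/poke individual fields in place, so only a pointer needs to cross the boundary each tick. (Older hsc2hs versions lack #alignment; the wiki page linked above defines a small #let alignment helper macro for them.)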

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben

    Read the article

  • How to parallelize this groovy code?

    - by lucas
    I'm trying to write a reusable component in Groovy to easily shoot off emails from some of our Java applications. I would like to pass it a List<Email>, where Email is just a POJO (POGO?) with some email info. I'd like it to be multithreaded, at least running all the email logic in a second thread, or making one thread per email. I am really foggy on multithreading in Java, so that probably doesn't help! I've attempted a few different ways, but here is what I have right now:

        void sendEmails(List<Email> emails) {
            def threads = []
            def sendEm = emails.each { email ->
                def th = new Thread({
                    Random rand = new Random()
                    def wait = (long) (rand.nextDouble() * 1000)
                    println "in closure"
                    this.sleep wait
                    sendEmail(email)
                })
                println "putting thread in list"
                threads << th
            }
            threads.each { it.run() }
            threads.each { it.join() }
        }

    I was hoping the sleep would randomly slow some threads down so the console output wouldn't be sequential. Instead, I see this:

        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        in closure
        sending email1
        in closure
        sending email2
        in closure
        sending email3
        in closure
        sending email4
        in closure
        sending email5
        in closure
        sending email6
        in closure
        sending email7
        in closure
        sending email8
        in closure
        sending email9
        in closure
        sending email10

    sendEmail basically does what you'd expect, including the println statement. The client that calls this follows:

        void doSomething() {
            Mailman emailer = MailmanFactory.getExchangeEmailer()
            def to = ["one", "two"]
            def from = "noreply"
            def li = []
            def email
            (1..10).each {
                email = new Email(to, null, from, "email" + it, "hello")
                li << email
            }
            emailer.sendEmails li
        }
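
    One observation, offered as a likely explanation rather than a certainty: Thread.run() executes the closure synchronously in the calling thread, whereas Thread.start() is what actually spawns a new thread, which would account for the perfectly sequential output above. A minimal sketch of the last two lines with start():

        // start() spawns the threads; run() would just execute each closure
        // in the current thread, one after another
        threads.each { it.start() }
        threads.each { it.join() }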

    Read the article

  • Error_No_Token by adding printer driver (c#)

    - by user1686388
    I'm trying to add printer drivers using the Windows API function AddPrinterDriver. The Win32 error 1008 ("An attempt was made to reference a token that does not exist.") is always generated. My code is shown below:

        [DllImport("Winspool.drv")]
        static extern bool AddPrinterDriver(string Name, Int32 Level, [In] ref DRIVER_INFO_3 DriverInfo);

        [StructLayout(LayoutKind.Sequential)]
        public struct DRIVER_INFO_3
        {
            public Int32 cVersion;
            public string Name;
            public string Environment;
            public string DriverPath;
            public string DataFile;
            public string ConfigFile;
            public string HelpFile;
            public string DependentFiles;
            public string MonitorName;
            public string DefaultDataType;
        }

        // .......................
        DRIVER_INFO_3 di = new DRIVER_INFO_3();
        // ......................
        AddPrinterDriver(Environment.MachineName, 3, ref di);

    I have also tried to get a token with ImpersonateSelf before adding the printer driver, but error 1008 persists. Does anyone have an idea?

    Best regards.

    Read the article

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images. Each of the scans is already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file.

    I could take each of these images and just paste them into a word processor (e.g. OpenOffice) - unfortunately, the problem here is that it's a very big book and I've got quite a few of these books to get through. It would obviously be time-consuming. This is volunteer work!

    My second idea was to use LaTeX - I could make a very simple document that consists of nothing more than a series of in-line image includes. I'm sure that this approach could be made to work; it's just a little on the complex side for something which seems like a very simple job.

    It occurred to me that there must be a simpler way - so any suggestions? I'm on Ubuntu 9.10, my primary programming language is Python, but if the solution is super-simple I'd happily adopt any technology that works.
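
    A sketch of the two simplest routes that come to mind (the file names are illustrative; this assumes the scans sort correctly by name, e.g. zero-padded page numbers):

        # ImageMagick (readily available on Ubuntu): one command, but it
        # re-encodes the images while writing the PDF
        convert page_*.jpg book.pdf

        # img2pdf (a small Python-based tool) embeds the JPEGs losslessly
        # instead of re-encoding them
        img2pdf page_*.jpg -o book.pdf

    Either way the page order is just the shell's glob order, so renaming the scans to something like page_001.jpg, page_002.jpg, ... is the only preparation needed.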

    Read the article

  • Finding a list of indices from master array using secondary array with non-unique entries

    - by fideli
    I have a master array of length n of id numbers that apply to other analogous arrays with corresponding data for elements in my simulation that belong to those id numbers (e.g. data[id]). Were I to generate a list of id numbers of length m separately and need the information in the data array for those ids, what is the best method of getting a list of indices idx of the original array of ids in order to extract data[idx]? That is, given:

        a = numpy.array([1, 3, 4, 5, 6])        # master array
        b = numpy.array([3, 4, 3, 6, 4, 1, 5])  # secondary array

    I would like to generate:

        idx = numpy.array([1, 2, 1, 4, 2, 0, 3])

    The array a is typically in sequential order, but it's not a requirement. Also, array b will most definitely have repeats and will not be in any order. My current method of doing this is:

        idx = numpy.array([numpy.where(a == bi)[0][0] for bi in b])

    I timed it using the following test:

        a = (numpy.random.uniform(100, size=100)).astype('int')
        b = numpy.repeat(a, 100)
        timeit method1(a, b)
        10 loops, best of 3: 53.1 ms per loop

    Is there a better way of doing this?
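
    A sketch of a vectorized alternative using numpy.searchsorted (this assumes every value in b actually occurs in a, as in the example; the sorter argument handles the case where a is not already sorted):

        import numpy

        a = numpy.array([1, 3, 4, 5, 6])
        b = numpy.array([3, 4, 3, 6, 4, 1, 5])

        order = numpy.argsort(a)            # permutation that sorts a
        idx = order[numpy.searchsorted(a, b, sorter=order)]
        # idx -> array([1, 2, 1, 4, 2, 0, 3])

    This replaces the per-element numpy.where scan with one sort of a and a single batch of binary searches over b, which should scale much better as the arrays grow.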

    Read the article

  • Calling from C# to C function which accept a struct array allocated by caller

    - by lifey
    I have the following C struct:

        struct XYZ {
            void *a;
            char fn[MAX_FN];
            unsigned long l;
            unsigned long o;
        };

    And I want to call the following function from C#, where xyzTbl is an array of XYZ of size numEntries which is allocated by the caller:

        extern "C" int func(int handle, int *numEntries, XYZ *xyzTbl);

    I have defined the following C# struct:

        [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Ansi)]
        public struct XYZ
        {
            public System.IntPtr rva;
            [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst = 128)]
            public string fn;
            public uint l;
            public uint o;
        }

    and a method:

        [System.Runtime.InteropServices.DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern Int32 func(Int32 handle, ref Int32 numEntries, [MarshalAs(UnmanagedType.LPArray)] XYZ[] arr);

    Then I try to call the function:

        XYZ[] xyz = new XYZ[numEntries];
        for (...)
            xyz[i] = new XYZ();
        func(handle, numEntries, xyz);

    Of course it does not work. Can someone shed light on what I am doing wrong?

    Read the article

  • Is this an acceptable use of "ASCII arithmetic"?

    - by jmgant
    I've got a string value of the form 10123X123456, where 10 is the year, 123 is the day number within the year, and the rest is unique system-generated stuff. Under certain circumstances, I need to add 400 to the day number, so that the number above, for example, would become 10523X123456.

    My first idea was to substring those three characters, convert them to an integer, add 400 to it, convert them back to a string, and then call replace on the original string. That works. But then it occurred to me that the only character I actually need to change is the third one, and that the original value would always be 0-3, so there would never be any "carrying" problems. It further occurred to me that the ASCII code points for the numbers are consecutive, so adding the number 4 to the character "0", for example, would result in "4", and so forth. So that's what I ended up doing.

    My question is, is there any reason that won't always work? I generally avoid "ASCII arithmetic" on the grounds that it's not cross-platform or internationalization friendly. But it seems reasonable to assume that the code points for numbers will always be sequential, i.e., "4" will always be 1 more than "3". Anybody see any problem with this reasoning? Here's the code:

        string input = "10123X123456";
        input[2] += 4; // Output should be 10523X123456

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
    I have the following C struct from the source code of a server, and many similar:

        // preprocessing magic: 1-byte alignment
        typedef struct AUTH_LOGON_CHALLENGE_C
        {
            // 4 byte header
            uint8  cmd;
            uint8  error;
            uint16 size;

            // 30 bytes
            uint8  gamename[4];
            uint8  version1;
            uint8  version2;
            uint8  version3;
            uint16 build;
            uint8  platform[4];
            uint8  os[4];
            uint8  country[4];
            uint32 timezone_bias;
            uint32 ip;
            uint8  I_len;

            // I_len bytes
            uint8  I[1];
        } sAuthLogonChallenge_C;

        // usage (the actual code that will read my packets):
        sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array

    These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        unsafe struct foo { ... }

    and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance.

    What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?

    Read the article

  • directory resource does not create directory

    - by Dan Tenenbaum
    I have a Vagrantfile that provisions a VM by running a Chef recipe. The first resource in the recipe is:

        directory "/downloads" do
          owner "root"
          group "root"
          mode "0755"
          action :create
        end

        # check that it worked:
        raise "/downloads doesn't exist!" unless File.exists? "/downloads"

    When I run this at work, it works fine. When I run it at home, it fails: the exception is raised when I check whether /downloads exists. I'm not sure why this is happening. I would expect it to behave identically, since the underlying Vagrant box is the same both at work and at home.

    I am a Chef newb, so perhaps there is something I am not understanding about the order in which the resources are run within my recipe? I would expect them to run in sequential order. I also tried putting a notifies call inside the directory block, calling another execute block :immediately. That works, but inside the second execute block I test whether /downloads has been created and it hasn't. Clearly I'm missing something very basic.
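
    A guess at the basic thing being missed, sketched rather than verified: Chef runs a recipe in two phases. The bare Ruby in the file (including the raise ... unless File.exists? line) is evaluated during the compile phase, before any resource - including the directory resource above it - is actually converged, so the check can fire even though the directory would have been created moments later. (It presumably "works" at work because /downloads already exists there from an earlier run.) Moving the check into a resource defers it to converge time:

        directory "/downloads" do
          owner "root"
          group "root"
          mode "0755"
          action :create
        end

        # ruby_block bodies run during the converge phase, in resource order,
        # so this executes after the directory resource has actually run
        ruby_block "verify /downloads" do
          block do
            raise "/downloads doesn't exist!" unless ::File.exists?("/downloads")
          end
        end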

    Read the article

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions...

    1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple DBs, so I'm looking for generalizations, if that's possible.

    2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway?

    My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to use all the bits of each character efficiently; I don't, for example, plan to encode the integer 123 as the character string "123". So I don't think varchars will require more space than integers.

    Read the article

  • How to define 2-bit numbers in C, if possible?

    - by Eddy
    For a university project I'm simulating a process called random sequential adsorption. One of the things I have to do involves randomly depositing squares (which cannot overlap) onto a lattice until there is no more room left, repeating the process several times in order to find the average 'jamming' coverage %.

    Basically I'm performing operations on a large array of integers, of which 3 possible values exist: 0, 1 and 2. The sites marked with '0' are empty, the sites marked with '1' are full. Initially the array is defined like this:

        int i, j;
        int n = 1000000000;
        int array[n][n];

        for (j = 0; j < n; j++) {
            for (i = 0; i < n; i++) {
                array[i][j] = 0;
            }
        }

    Say I want to deposit 5*5 squares randomly on the array (so that they cannot overlap), with the squares represented by '1's. This would be done by choosing the x and y coordinates randomly and then creating a 5*5 square of '1's with the top-left point of the square starting at that point. I would then mark sites near the square as '2's. These represent the sites that are unavailable, since depositing a square at those sites would cause it to overlap an existing square. This process continues until there is no more room left to deposit squares on the array (basically, no more '0's left in the array).

    Anyway, to the point: I would like to make this process as efficient as possible by using bitwise operations. This would be easy if I didn't have to mark sites near the squares. I was wondering whether creating a 2-bit number would be possible, so that I can account for the sites marked with '2'. Sorry if this sounds really complicated, I just wanted to explain why I want to do this.
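
    C has no 2-bit integer type, but the usual trick is to pack four 2-bit cells into each byte with shifts and masks. A minimal sketch (the lattice size N here is an illustrative value, not the one from the question):

        #include <stdint.h>
        #include <stddef.h>

        #define N 1024                /* illustrative lattice size */
        #define CELLS_PER_BYTE 4      /* 2 bits per lattice site */

        static uint8_t lattice[(N * N + CELLS_PER_BYTE - 1) / CELLS_PER_BYTE];

        /* value must be 0 (empty), 1 (full) or 2 (blocked); 3 is unused */
        static void set_cell(size_t x, size_t y, unsigned value)
        {
            size_t cell = y * N + x;
            size_t byte = cell / CELLS_PER_BYTE;
            unsigned shift = (cell % CELLS_PER_BYTE) * 2;
            lattice[byte] = (uint8_t)((lattice[byte] & ~(0x3u << shift))
                                      | ((value & 0x3u) << shift));
        }

        static unsigned get_cell(size_t x, size_t y)
        {
            size_t cell = y * N + x;
            size_t byte = cell / CELLS_PER_BYTE;
            unsigned shift = (cell % CELLS_PER_BYTE) * 2;
            return (lattice[byte] >> shift) & 0x3u;
        }

    This quarters the memory footprint relative to one byte per site, and whole bytes (four sites at a time) can still be tested or cleared with ordinary bitwise operations.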

    Read the article

  • Highlighting a piechart slice from an HTML element (mouseover)

    - by nickhar
    I have a series of HTML table cells with data, an example of which is:

        <tr id="rrow1">
          <td>
            <a href="/electricity" class="category">Electricity</a>
          </td>
          <td>
            901.471
          </td>
        </tr>
        <tr id="rrow2">...
        <tr id="rrow3">...
        etc

    In this case, each <tr> (or, hypothetically for the wider community, a div/span/tr/td) is assigned a sequential id based on $rrow++; in a while loop (in PHP).

    I also have a pie chart using the Highcharts library, where I'd like to highlight the slice (sliced: true) on mouseover of a particular div/span/tr/td element - in this case #rrow1 as above, but multiple/iterative elements as required - and (sliced: false) on mouseout.

    As a simple example, I've tried accessing various derivatives of the following, but failed:

        $('#rrow1').mouseover(function() {
            chart.series[0].graph.attr('sliced', true);
        });
        $('#rrow1').mouseout(function() {
            chart.series[0].graph.attr('sliced', false);
        });

    The nearest I've found is this, but heavily bastardised and without success:

        plotOptions: {
            series: {
                mouseOver: function() {
                    if( $('#rrow1').mouseover ) series.x = sliced: true;
                },
                mouseOut: function() {
                    if( $('#rrow1').mouseout ) series.x = sliced: false;
                }
            }
        }

    These are far from correct, and despite searching I can't find a valid/helpful example to work from or draw direction from. You can view the pie chart in question on jsfiddle here.
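
    For what it's worth, Highcharts exposes slicing on the individual point objects rather than on series.graph; a sketch of wiring the rows to the matching slices by index (this assumes the rrowN rows appear in the same order as the points in chart.series[0].data):

        // rows rrow1..rrowN line up with the pie's points in order
        $('tr[id^="rrow"]').each(function (i) {
            $(this).mouseover(function () {
                chart.series[0].data[i].slice(true);   // pull the slice out
            }).mouseout(function () {
                chart.series[0].data[i].slice(false);  // put it back
            });
        });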

    Read the article

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records.

    Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive.

    My feeling is I should create an intersection table, something like:

        random_sorts( sort_id, created_at, user_id NULL if guest )
        random_sort_items( sort_id, item_id, position )

    and then simply store the sort_id in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And, as load increases, the size/performance of this table (even properly indexed) drops.

    It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
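
    One lighter-weight alternative, sketched here rather than recommended outright: MySQL's RAND() accepts a seed, and with a constant seed it produces a repeatable sequence for a given query, so storing a single integer seed in the session gives a stable shuffle that can be paginated with LIMIT/OFFSET and needs no intersection table (the trade-offs being that MySQL re-sorts on every page load, and newly created rows shift positions within the shuffle). Here items stands in for the actual table:

        -- seed is one small integer kept in the Rails session
        SELECT *
        FROM items
        ORDER BY RAND(12345)   -- same seed => same ordering on each request
        LIMIT 30 OFFSET 60;    -- page 3 at 30 records per page

    On the Rails side this is just an :order => "RAND(#{seed.to_i})" option (coercing the seed to an integer keeps the interpolation safe).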

    Read the article

  • Stopping a MATLAB GUI callback

    - by leonhart88
    Dear All,

    I have a START and a STOP button. When I hit START, I run a bunch of code in its callback. It's basically a sequential "script" that opens valves, dispenses water and then closes the valves... there is no while() loop and it doesn't repeat. I want to be able to stop this process at any time using the STOP button.

    Most of the related answers I've seen cover the cases where a while() loop is used. Some people have also suggested periodically checking whether the STOP button was pressed (using a variable or handle variable). Since I do not have a while loop, I can't solve it that way. Also, I'd like to be able to exit immediately, without having to check periodically (because checking multiple times in my code would be ugly and confusing).

    Is there a way to terminate the callback which was interrupted by the STOP button? If not, is it possible to have the START button run a .m file and then have the STOP button terminate that .m file? The worst-case scenario would be to check a variable periodically.

    UPDATE: Well, it looks like the worst-case scenario is what is suggested by MATLAB... http://www.mathworks.com/support/solutions/en/data/1-33IK85/index.html?product=ML&solution=1-33IK85

    Thanks.

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi. I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties).

    I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?

    Read the article
