Search Results

Search found 1657 results on 67 pages for 'writes on'.


  • Bioperl, equivalent of IO::ScalarArray for array of Seq objects?

    - by Ryan Thompson
    In Perl, we have IO::ScalarArray for treating the elements of an array like the lines of a file. In BioPerl, we have Bio::SeqIO, which can produce a filehandle that reads and writes Bio::Seq objects instead of strings representing lines of text. I would like to do a combination of the two: I would like to obtain a handle that reads successive Bio::Seq objects from an array of such objects. Is there any way to do this? Would it be trivial for me to implement a module that does this? My reason for wanting this is that I would like to be able to write a subroutine that accepts either a Bio::SeqIO handle or an array of Bio::Seq objects, and I'd like to avoid writing separate loops based on what kind of input I get. Perhaps the following would be better than writing my own IO module?

        sub process_sequences {
            my $input = $_[0];    # either an arrayref of Bio::Seq or a Bio::SeqIO handle
            my $nextseq;
            if (ref $input eq 'ARRAY') {
                my $pos = 0;
                $nextseq = sub { return $input->[$pos++] if $pos < @$input };
            }
            else {
                $nextseq = sub { $input->next_seq() };   # Bio::SeqIO reads with next_seq(), not getline()
            }
            while (my $seq = $nextseq->()) {
                do_cool_stuff_with($seq);
            }
        }


  • ASP.NET MVC static-asset aides/practices

    - by shannon
    I want to keep assets that are only used by one view in a view-specific folder, so my Search.aspx properly finds images/*.jpg and helps me maintain my convention:

        ~/Areas/Candidate/Views/Job/Search.aspx -> ~/Assets/Candidate/Job/Search/images/*.jpg

    Perhaps with the ability to easily reference controller- or area-common assets manually or automatically:

        ~/Assets/Candidate/Job/images/*.jpg
        ~/Assets/Candidate/images/*.jpg

    If you wonder why I'm doing this, then speak up; I'm probably missing something. But here's why: I don't want stale static assets sitting in my ASP.NET MVC projects, which I expect to be an automatic outcome of a shared ~/Assets/Images folder: i.e. as a shared asset loses its last reference, who knows to delete it, especially with it being so difficult to trace content-link validity in MVC projects? How do you, personally, do this? I can imagine, for example: implementing HtmlHelper extension methods for URL generation, extending ViewPage and ViewMasterPage with URL-generation methods, or implementing an inbound request filter to search related folders for static assets. And are there good libraries out there for this? For example, something that also automatically appends timestamps for .js and .css files, writes the <script> and <link> tags for me, and maybe even allows me to inject includes into the head section from outside head code?
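
    For the HtmlHelper route, a minimal sketch of what such an extension could look like. This is not a known library; the AssetUrl name and the area/controller/action-to-~/Assets mapping are assumptions taken from the convention above:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static class AssetHelpers
        {
            // Builds ~/Assets/{area}/{controller}/{action}/{relativePath}
            // from the current route, mirroring the view's folder location.
            public static string AssetUrl(this HtmlHelper html, string relativePath)
            {
                RouteData route = html.ViewContext.RouteData;
                string area = route.DataTokens["area"] as string ?? string.Empty;
                string controller = route.GetRequiredString("controller");
                string action = route.GetRequiredString("action");
                string virtualPath = string.Format("~/Assets/{0}/{1}/{2}/{3}",
                    area, controller, action, relativePath).Replace("//", "/");
                return UrlHelper.GenerateContentUrl(virtualPath, html.ViewContext.HttpContext);
            }
        }

    A view could then write <img src="<%= Html.AssetUrl("images/logo.jpg") %>" /> and keep its images next to its own folder name.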


  • Can I get a reference to a pending transaction from a SqlConnection object?

    - by Rune
    Hey, suppose someone (other than me) writes the following code and compiles it into an assembly:

        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            using (var transaction = conn.BeginTransaction())
            {
                /* Update something in the database */
                /* Then call any registered OnUpdate handlers */
                InvokeOnUpdate(conn);
                transaction.Commit();
            }
        }

    The call to InvokeOnUpdate(IDbConnection conn) calls out to an event handler that I can implement and register. Thus, in this handler I will have a reference to the IDbConnection object, but I won't have a reference to the pending transaction. Is there any way in which I can get a hold of the transaction? In my OnUpdate handler I want to execute something similar to the following:

        private void MyOnUpdateHandler(IDbConnection conn)
        {
            var cmd = conn.CreateCommand();
            cmd.CommandText = someSQLString;
            cmd.CommandType = CommandType.Text;
            cmd.ExecuteNonQuery();
        }

    However, the call to cmd.ExecuteNonQuery() throws an InvalidOperationException complaining that "ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized". Can I in any way enlist my SqlCommand cmd with the pending transaction? Can I retrieve a reference to the pending transaction from the IDbConnection object (I'd be happy to use reflection if necessary)?
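
    For what it's worth, a reflection-based sketch of that last idea. The member names here (InnerConnection, CurrentTransaction, Parent) are undocumented internals of System.Data.SqlClient, so treat this as a fragile, version-dependent assumption rather than a supported API:

        using System.Data.SqlClient;
        using System.Reflection;

        private static SqlTransaction GetPendingTransaction(SqlConnection conn)
        {
            var flags = BindingFlags.NonPublic | BindingFlags.Instance;
            // internal property SqlConnection.InnerConnection (assumed name)
            object inner = typeof(SqlConnection)
                .GetProperty("InnerConnection", flags).GetValue(conn, null);
            // internal property CurrentTransaction on the inner connection (assumed name)
            object current = inner.GetType()
                .GetProperty("CurrentTransaction", flags).GetValue(inner, null);
            if (current == null) return null;
            // internal property Parent returns the public SqlTransaction (assumed name)
            return (SqlTransaction)current.GetType()
                .GetProperty("Parent", flags).GetValue(current, null);
        }

    The handler could then assign cmd.Transaction before calling ExecuteNonQuery().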


  • How do I read binary C++ protobuf data using Python protobuf?

    - by nbolton
    The Python version of Google protobuf gives us only SerializeAsString(), whereas the C++ version gives us both SerializeToArray(...) and SerializeAsString(). We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string? Is this the correct way of doing it?

        binary = get_binary_data()
        binary_size = get_binary_size()
        string = ''
        for i in range(binary_size):
            string += binary[i]
        message = MyMessage()
        message.ParseFromString(string)

    Update: Here's a new example, and a problem:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(data)

    When we get to the foo_bar.ParseFromString(data) line, I get this error:

        Exception Type: DecodeError
        Exception Value: Too many bytes when decoding varint.

    Update 2: It turns out that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding). This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate this, I have used this temporary code:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            string = ''
            for i in range(0, len(data)):
                byte = data[i]
                if byte != '\xcc':  # yuck!
                    string += data[i]
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(string)

    There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
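
    A sketch of that length-prefixed layout on the Python side. The module and message names (foobars_pb2, FooBar) stand in for whatever protoc actually generates here, and the 4-byte little-endian prefix is an arbitrary choice that the C++ writer would have to match:

        import struct
        from foobars_pb2 import FooBar  # assumed generated module

        def write_message(f, msg):
            data = msg.SerializeToString()
            f.write(struct.pack('<I', len(data)))  # 4-byte length prefix
            f.write(data)

        def read_messages(path):
            with open(path, 'rb') as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:
                        break  # clean EOF
                    (size,) = struct.unpack('<I', header)
                    msg = FooBar()
                    msg.ParseFromString(f.read(size))
                    yield msg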


  • What is a Windows scripting language that: does not rely on .NET, offers the most OOP support, and has the simplest deployment?

    - by jJack
    What is a Windows scripting language that does not rely on .NET, offers the most OOP support, and has the simplest deployment? It doesn't necessarily need to be a scripting language; it can be in the form of a compiled executable, however it needs to be self-contained: only ONE file, no DLLs, and it cannot be declared to "include" other files. I cannot rely on the user having any .NET installed, and it needs to be able to run on Windows 7 64-bit. By "most OOP support", I basically mean anything that has better OOP support than VBScript. A little context: everything I have done thus far is in VBScript and writes a bunch of data into an .html file, which in the end is to be viewed by Internet Explorer. It also zips up a bunch of directories and files. It heavily relies on accessing the registry, file system, and WMI (I can probably do without accessing WMI, though, as long as I have good registry access). I can bring myself to code in any language so long as it meets my ridonkulous requirements stated above. I look forward to some good answers from those more experienced than I.


  • With Go, how to append an unknown number of bytes into a vector and get a slice of bytes?

    - by Stephen Hsu
    I'm trying to encode a large number into a list of bytes (uint8 in Go). The number of bytes is unknown, so I'd like to use a vector. But Go doesn't provide a vector of byte; what can I do? And is it possible to get a slice of such a byte vector? I intend to implement data compression. Instead of storing small and large numbers with the same number of bytes, I'm implementing a variable-byte scheme that uses fewer bytes for small numbers and more bytes for large numbers. My code cannot compile, invalid type assertion:

        package main

        import (
            //"fmt"
            "container/vector"
        )

        func vbEncodeNumber(n uint) []byte {
            bytes := new(vector.Vector)
            for {
                bytes.Push(n % 128)
                if n < 128 {
                    break
                }
                n /= 128
            }
            bytes.Set(bytes.Len()-1, bytes.Last().(byte)+byte(128))
            return bytes.Data().([]byte) // <- invalid type assertion
        }

        func main() { vbEncodeNumber(10000) }

    I wish to write a lot of such output into a binary file, so I wish the function could return a byte array. I haven't found a code example using vector.
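
    For reference, a sketch of the same algorithm using a plain []byte and the built-in append, which grows the slice as needed and makes both the vector and the failing type assertion unnecessary:

        package main

        import "fmt"

        // vbEncodeNumber stores 7 bits per byte, low-order groups first,
        // and adds 128 to the final byte as a terminator, mirroring the
        // vector-based version above.
        func vbEncodeNumber(n uint) []byte {
            var bytes []byte
            for {
                bytes = append(bytes, byte(n%128))
                if n < 128 {
                    break
                }
                n /= 128
            }
            bytes[len(bytes)-1] += 128
            return bytes
        }

        func main() {
            fmt.Println(vbEncodeNumber(10000)) // prints [16 206]
        }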


  • Patterns for non-layered applications

    - by Paul Stovell
    In Patterns of Enterprise Application Architecture, Martin Fowler writes:

        This book is thus about how you decompose an enterprise application into layers and how those layers work together. Most nontrivial enterprise applications use a layered architecture of some form, but in some situations other approaches, such as pipes and filters, are valuable. I don't go into those situations, focussing instead on the context of a layered architecture because it's the most widely useful.

    What patterns exist for building non-layered applications/parts of an application? Take a statistical modelling engine for a financial institution. There might be a layer for data access, but I expect that most of the code would be in a single layer. Would you still expect to see Gang of Four patterns in such a layer? How about a domain model? Would you use OO at all, or would it be purely functional? The quote mentions pipes and filters as alternate models to layers. I can easily imagine such an engine using pipes as a way to break down the data processing. What other patterns exist? Are there common patterns for areas like task scheduling, results aggregation, or work distribution? What are some alternatives to MapReduce?


  • Copy a multi-dimensional array by value (not by reference) in PHP

    - by Simon R
    Language: PHP. I have a form which asks users for their educational details, course details and technical details. When the form is submitted, the page goes to a different page to run processes on the information and save parts to a database. HOWEVER(!) I then need to return the page back to the original page, where having access to the original post information is needed. I thought it would be simple to copy (by value) the multi-dimensional (md) $_POST array to $_SESSION['post']:

        session_start();
        $_SESSION['post'] = $_POST;

    However, this only appears to place the top level of the array into $_SESSION['post'], not doing anything with the children/sub-arrays. An abbreviated form of the md $_POST array is as follows:

        Array
        (
            [formid] => 2
            [edu] => Array
                (
                    ['id'] => Array
                        (
                            [1] => new_1
                            [2] => new_2
                        )
                    ['nameOfInstitution'] => Array
                        (
                            [1] => 1
                            [2] => 2
                        )
                    ['qualification'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                    ['grade'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                )
            [vID] => 61
            [Submit] => Save and Continue
        )

    If I echo $_SESSION['post']['formid'] it writes "2", and if I echo $_SESSION['post']['edu'] it returns "Array". If I check that edu is an array (is_array($_SESSION['post']['edu'])) it returns true. If I echo $_SESSION['post']['edu']['id'] it returns "Array", but when checked (is_array($_SESSION['post']['edu']['id'])) it returns false and I cannot echo out any of the elements. How do I successfully copy (by value, not by reference) the whole array (including its children) to a new array?
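
    Worth noting: plain assignment in PHP already copies arrays by value, recursively, so the snippet above should work as-is. A quick sanity check (a sketch) is to mutate the original after copying and dump the copy:

        <?php
        session_start();
        $_SESSION['post'] = $_POST;        // arrays are assigned by value in PHP,
                                           // children included
        $_POST['edu']['id'][1] = 'changed';
        print_r($_SESSION['post']['edu']['id']);  // still shows new_1: the copy
                                                  // is independent of $_POST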


  • StreamReader.EndOfStream produces IOException

    - by Ziplin
    I'm working on an application that accepts TCP connections and reads in data until a </File> marker is read, and then writes that data to the filesystem. I don't want to disconnect; I want to let the client sending the data do that, so they can send multiple files in one connection. I'm using StreamReader.EndOfStream around my outer loop, but it throws an IOException when the client disconnects. Is there a better way to do this?

        private static void RecieveAsyncStream(IAsyncResult ar)
        {
            TcpListener listener = (TcpListener)ar.AsyncState;
            TcpClient client = listener.EndAcceptTcpClient(ar);
            // init the streams
            NetworkStream netStream = client.GetStream();
            StreamReader streamReader = new StreamReader(netStream);
            StreamWriter streamWriter = new StreamWriter(netStream);
            while (!streamReader.EndOfStream) // throws IOException
            {
                string file = "";
                while (file != "</File>" && !streamReader.EndOfStream)
                {
                    file += streamReader.ReadLine();
                }
                // write file to filesystem
            }
            listener.BeginAcceptTcpClient(RecieveAsyncStream, listener);
        }
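
    One common shape for this (a sketch, not a drop-in fix; WriteToDisk is a hypothetical helper): treat the IOException as the client's end-of-session signal instead of polling EndOfStream, since a remote disconnect surfaces as an exception on the blocking read, while a graceful close makes ReadLine return null:

        try
        {
            string line;
            var sb = new StringBuilder();
            while ((line = streamReader.ReadLine()) != null) // null = client closed cleanly
            {
                if (line == "</File>")
                {
                    WriteToDisk(sb.ToString()); // hypothetical helper
                    sb.Length = 0;              // reset for the next file
                }
                else
                {
                    sb.Append(line);
                }
            }
        }
        catch (IOException)
        {
            // client dropped the connection mid-stream; end the session
        }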


  • c++ passing unknown type to a function and any Class type definition

    - by user259789
    I am trying to create a generic class to write and read objects to/from a file. Called ActiveRecord, the class only has one method, which saves the object itself:

        void ActiveRecord::saveRecord() {
            string fileName = "data.dat";
            ofstream stream(fileName.c_str(), ios::out);
            if (!stream) {
                cerr << "Error opening file: " << fileName << endl;
                exit(1);
            }
            stream.write(reinterpret_cast<const char *>(this), sizeof(ActiveRecord));
            stream.close();
        }

    Now I'm extending this class with a User class:

        class User : public ActiveRecord {
        public:
            User(void);
            ~User(void);
            string name;
            string lastName;
        };

    To create and save the user I would like to do something like:

        User user = User();
        user.name = "John";
        user.lastName = "Smith";
        user.save();

    How can I get this ActiveRecord::saveRecord() method to take any object and class definition, so it writes whatever I send it? It would look like:

        void ActiveRecord::saveRecord(foo_instance, FooClass) {
            string fileName = "data.dat";
            ofstream stream(fileName.c_str(), ios::out);
            if (!stream) {
                cerr << "Error opening file: " << fileName << endl;
                exit(1);
            }
            stream.write(reinterpret_cast<const char *>(foo_instance), sizeof(FooClass));
            stream.close();
        }

    And while we're at it, what is the default object type in C++? E.g. in Objective-C it's id, in Java it's Object, in AS3 it's Object. What is it in C++?
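
    A sketch of the member-template route, which is the usual answer to "any type" in C++ (there is no universal base class; the closest things are void* and templates). Note that dumping an object's raw bytes only round-trips for trivially copyable types, so the std::string members in User would still need real serialization:

        #include <cstdlib>
        #include <fstream>
        #include <iostream>

        class ActiveRecord {
        public:
            // The compiler generates one saveRecord per concrete type,
            // so callers never have to name the class explicitly.
            template <typename T>
            static void saveRecord(const T& obj) {
                std::ofstream stream("data.dat", std::ios::binary);
                if (!stream) {
                    std::cerr << "Error opening file: data.dat" << std::endl;
                    std::exit(1);
                }
                stream.write(reinterpret_cast<const char*>(&obj), sizeof(T));
            }
        };

        struct Point { int x, y; };  // trivially copyable, safe to dump as bytes

        int main() {
            Point p = {3, 4};
            ActiveRecord::saveRecord(p);  // T deduced as Point
        }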


  • Checking inherited attributes in an 'ancestry' based SQL table

    - by Brendon Muir
    I'm using the ancestry gem to help organise my app's tree structure in the database. It basically writes a child's ancestor information to a special column called 'ancestry'. The ancestry column for a particular child might look like '1/34/87', where the parent of this child is 87, and then 87's parent is 34 and 34's is 1. It seems possible that we could select rows from this table, each with a subquery that checks all the ancestors to see if a certain attribute is set. E.g. in my app you can hide an item and its children just by setting the parent element's visibility column to 0. I want to be able to find all the items where none of their ancestors are hidden. I tried converting the slashes to commas with the REPLACE command, but IN required a set of comma-separated integers rather than one string of comma-separated string numbers. It's funny, because I can do this query in two steps, e.g. retrieve the row, then take its ancestry column, split out the ids and make another query that checks that the id is IN that set of ids and that visibility isn't ever 0, and voila! But joining these into one query seems to be quite a task. Much searching has shown a few answers but none really do what I want.

        SELECT * FROM t1 WHERE id = 99;   -- 99's ancestry column reads '1/34/87'
        SELECT * FROM t1 WHERE visibility = 0 AND id IN (1,34,87);

    Kind of backwards, but if the second query returns no rows then the item is visible. Has anyone come across this before and come up with a solution? I don't really want to go the stored-procedure route. It's for a Rails app.
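
    For the single-query version, a sketch assuming MySQL (FIND_IN_SET is MySQL-specific) and the t1 table from above: reject any row whose ancestry path contains the id of a hidden row.

        SELECT child.*
        FROM t1 AS child
        WHERE NOT EXISTS (
            SELECT 1
            FROM t1 AS anc
            WHERE anc.visibility = 0
              AND FIND_IN_SET(anc.id, REPLACE(child.ancestry, '/', ',')) > 0
        );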


  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques do not help too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are using right now two HP DL380s with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious:

      • how many and how powerful servers are needed (number of processors/cores, size of RAM)
      • what network equipment should be used (what kind of switches, network cards)
      • any other hardware, like particular disc storage solutions, etc., that is needed

    Another thing is how to put everything together, that is, what is the most optimal architecture. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).


  • pthread condition variables on Linux, odd behaviour.

    - by janesconference
    Hi. I'm synchronizing reader and writer processes on Linux. I have 0 or more processes (the readers) that need to sleep until they are woken up, read a resource, go back to sleep and so on. Please note I don't know how many reader processes are up at any moment. I have one process (the writer) that writes on a resource, wakes up the readers and does its business until another resource is ready (in detail, I developed a no-starve readers-writers solution, but that's not important).

    To implement the sleep/wake-up mechanism I use a POSIX condition variable, pthread_cond_t. The clients call pthread_cond_wait() on the variable to sleep, while the server does a pthread_cond_broadcast() to wake them all up. As the manual says, I surround these two calls with a lock/unlock of the associated pthread mutex. The condition variable and the mutex are initialized in the server and shared between processes through a shared memory area (because I'm not working with threads, but with separate processes), and I'm sure my kernel/syscalls support it (because I checked _POSIX_THREAD_PROCESS_SHARED).

    What happens is that the first client process sleeps and wakes up perfectly. When I start the second process, it blocks on its pthread_cond_wait() and never wakes up, even if I'm sure (from the logs) that pthread_cond_broadcast() is called. If I kill the first process and launch another one, it works perfectly. In other words, pthread_cond_broadcast() seems to wake up only one process at a time. If more than one process waits on the very same shared condition variable, only the first one manages to wake up correctly, while the others just seem to ignore the broadcast. Why this behaviour? If I send a pthread_cond_broadcast(), every waiting process should wake up, not just one (and, however, not always the same one).
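
    For comparison, a sketch of the initialization such a setup normally needs: checking _POSIX_THREAD_PROCESS_SHARED alone isn't enough, since both the mutex and the condition variable must also be created with the PTHREAD_PROCESS_SHARED attribute to work across processes.

        #include <pthread.h>

        /* Initialize a mutex and condition variable that live in shared
           memory so multiple processes can wait on them. */
        void init_shared_sync(pthread_mutex_t *m, pthread_cond_t *c)
        {
            pthread_mutexattr_t ma;
            pthread_condattr_t ca;

            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(m, &ma);
            pthread_mutexattr_destroy(&ma);

            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(c, &ca);
            pthread_condattr_destroy(&ca);
        }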


  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of a file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but that usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on that subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE, to have some more opportunities to speed things up a bit). The schema change we are planning is adding an integer column to all tables and making it the primary key, in place of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
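
    As a starting point, the settings usually adjusted for this kind of one-off bulk rebuild look something like the following (a sketch only; the values are illustrative guesses for a 16G box, not tested recommendations):

        # temporary my.cnf settings for the migration window (InnoDB, MySQL 5.0)
        innodb_buffer_pool_size        = 10G   # cache as much of the rebuild as possible
        innodb_log_file_size           = 512M  # bigger redo logs = fewer checkpoints
                                               # (needs a clean shutdown + removing old ib_logfiles)
        innodb_flush_log_at_trx_commit = 2     # relax durability during the rebuild
        innodb_flush_method            = O_DIRECT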


  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references. But I will go ahead and throw out some code for you to look at.

        // Please note: do not use this code for async sockets; it is just to highlight my question.

        // SocketTransport
        // This is a simple wrapper class that is used as the 'state' object
        // when performing async socket reads/writes.
        public class SocketTransport
        {
            public Socket Socket;
            public byte[] Buffer;

            public SocketTransport(Socket socket, byte[] buffer)
            {
                this.Socket = socket;
                this.Buffer = buffer;
            }
        }

        // Entry point - creates a SocketTransport, then passes it as the state
        // object when asynchronously reading from the socket.
        public void ReadOne(Socket socket)
        {
            SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]);
            socketTransport_One.Socket.BeginReceive
            (
                socketTransport_One.Buffer,     // Buffer to store data
                0,                              // Buffer offset
                10,                             // Read length
                SocketFlags.None,               // SocketFlags
                new AsyncCallback(OnReadOne),   // Callback when BeginReceive completes
                socketTransport_One             // 'state' object to pass to callback
            );
        }

        public void OnReadOne(IAsyncResult ar)
        {
            SocketTransport socketTransport_One = ar.AsyncState as SocketTransport;
            ProcessReadOneBuffer(socketTransport_One.Buffer);  // Do processing

            // New read
            // Create another! SocketTransport (what happens to the first one?)
            SocketTransport socketTransport_Two =
                new SocketTransport(socketTransport_One.Socket, new byte[10]);
            socketTransport_Two.Socket.BeginReceive
            (
                socketTransport_Two.Buffer,
                0,
                10,
                SocketFlags.None,
                new AsyncCallback(OnReadTwo),
                socketTransport_Two
            );
        }

        public void OnReadTwo(IAsyncResult ar)
        {
            SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport;
            ..............

    So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it ~SocketA~). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to ~SocketA~.

    Q1. Will socketTransport_One be collected by the garbage collector when the method OnReadOne exits, even though it still contains a strong reference to ~SocketA~?

    Thanks all!


  • How can I get dtrace to run the traced command with non-root privileges?

    - by Gyom
    OS X lacks Linux's strace, but it has dtrace, which is supposed to be so much better. However, I miss the ability to do simple tracing on individual commands. For example, on Linux I can write strace -f gcc hello.c to capture all system calls, which gives me the list of all the filenames needed by the compiler to compile my program (the excellent memoize script is built upon this trick). I want to port memoize to the Mac, so I need some kind of strace. What I actually need is the list of files gcc reads and writes to, so what I need is more of a truss. Sure enough, I can say dtruss -f gcc hello.c and get somewhat the same functionality, but then the compiler is run with root privileges, which is obviously undesirable (apart from the massive security risk, one issue is that the a.out file is now owned by root :-). I then tried dtruss -f sudo -u myusername gcc hello.c, but this feels a bit wrong, and does not work anyway (I get no a.out file at all this time, not sure why). All that long story tries to motivate my original question: how do I get dtrace to run my command with normal user privileges, just like strace does on Linux? Edit: it seems that I'm not the only one wondering how to do this: question #1204256 is pretty much the same as mine (and has the same suboptimal sudo answer :-)


  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight message queue proxy. Its job is to receive messages from a web application (PHP) and send them to the message queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered, and the web application returns immediately.

    So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite.

    Now, the way I understand it, there are these components in this service:

      1. message listener (listens for the messages from PHP and writes them to an incoming queue)
      2. DB flusher (reads messages from the incoming queue and saves them to a database; due to SQLite's single-threadedness)
      3. MQ connection handler (keeps the connection to the MQ server online by reconnecting)
      4. message sender (collects messages from the SQLite db and sends them to the MQ server, then removes them from the db)

    I was thinking of using Twisted for #1 (TCPServer), but I'm having problems integrating it with the other points, which aren't event-driven. Intuition tells me that each of these points should be running in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement a worker thread alongside Twisted's main loop. The example I've started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
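
    One minimal sketch of the worker side (flush_and_send is a placeholder for the poster's own flusher/sender logic): Twisted's reactor owns a thread pool, so the blocking SQLite and MQ work can be pushed there with reactor.callInThread, driven by a LoopingCall instead of a hand-rolled thread:

        from twisted.internet import reactor, task

        def flush_and_send():
            # placeholder: drain incoming queue -> SQLite, then SQLite -> MQ.
            # This runs in a reactor thread-pool thread, so blocking I/O is
            # fine here, but it must not touch Twisted objects directly
            # (use reactor.callFromThread for that).
            pass

        def tick():
            reactor.callInThread(flush_and_send)

        task.LoopingCall(tick).start(1.0)  # schedule every second, on the reactor thread
        reactor.run()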


  • SQL placeholder in WHERE IN issue, inserted strings fail.

    - by Alastair Pitts
    As part of my job, I need to write SQL queries to connect to our PI database. To generate a query, I need to pass an array of tags (essentially primary keys), but these have to be inserted as strings. As this will be a modular query used for multiple tags, a placeholder is being used. The query relies upon the WHERE IN statement, which is where the placeholder is, like below:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        from piarchive..pitotal
        WHERE tag IN (?)
        AND time between ? and ?
        and timestep = '1d'
        and calcbasis = 'Eventweighted'
        and value <> ''

    The issue is the format in which the tags are to be passed in. If I add them directly into the query (for testing), they go in the format (these are example numbers):

        '000000012','00000032','005050236','4560236'

    and the query looks like:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        from piarchive..pitotal
        WHERE tag IN ('000000012','00000032','005050236','4560236')
        AND time between ? and ?
        and timestep = '1d'
        and calcbasis = 'Eventweighted'
        and value <> ''

    which works. If I try and add the same tags into the placeholder, the query fails. If I only add one tag, with no quotes (using the placeholder), the query works. Why is this happening? Is there any way around it?
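
    The usual explanation is that a single ? binds exactly one value, not a list, so the whole tag list arrives as one quoted string. The standard workaround is to build one placeholder per tag. A sketch (the question doesn't name the host language, so Python stands in here; start_time, end_time and cursor are assumed to exist):

        tags = ['000000012', '00000032', '005050236', '4560236']
        placeholders = ','.join('?' * len(tags))      # '?,?,?,?'
        sql = ('SELECT SUM(value * 5/1000) as "Hourly Flow [kL]" '
               'from piarchive..pitotal '
               'WHERE tag IN (%s) AND time between ? and ?' % placeholders)
        params = tags + [start_time, end_time]        # one value per placeholder
        cursor.execute(sql, params)                   # cursor from any DB-API connection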


  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on disk, 10M of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments?

    Table layout: id serial primary key, a bunch of int32s, and 3 foreign keys to

      • a small table, 198 rows, 16k on disk
      • a large table, 1.2M rows, 59 data + 89 index MB on disk
      • a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
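
    For reference, the usual shape of the manual workaround (a sketch; the table, constraint and column names are made up), which trades per-row FK checks during the load for one bulk validation when the constraint is re-added:

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_ref1_fkey;
        -- ... bulk load: INSERT INTO hourly_table SELECT ... / COPY FROM ...
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_ref1_fkey
            FOREIGN KEY (ref1_id) REFERENCES small_table (id);
        COMMIT;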


  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project with a single producer thread which writes events into a buffer, and an additional single consumer thread which takes events from the buffer. My goal is to optimize this thing for a single machine to achieve maximum throughput. Currently, I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, and therefore the pointers are only updated by a single thread).

        #define BUF_SIZE 32768

        struct buf_t {
            volatile int writepos;
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;
        };

        void produce (buf_t *b, void * e) {
            int next = (b->writepos+1) % BUF_SIZE;
            while (b->readpos == next); // queue is full. wait
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        void * consume (buf_t *b) {
            while (b->readpos == b->writepos); // nothing to consume. wait
            int next = (b->readpos+1) % BUF_SIZE;
            void * res = b->buffer[b->readpos];
            b->readpos = next;
            return res;
        }

        buf_t *alloc () {
            buf_t *b = (buf_t *)malloc(sizeof(buf_t));
            b->writepos = 0;
            b->readpos = 0;
            return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also resulted in some speed-up. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding, etc.?
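
    One illustrative direction (a sketch, assuming 64-byte cache lines): make the separation explicit with padding rather than relying on the buffer array to keep the two indices apart.

        #define CACHE_LINE 64

        struct buf_t {
            volatile int writepos;                  /* written only by producer */
            char pad1[CACHE_LINE - sizeof(int)];    /* keep it on its own cache line */
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;                   /* written only by consumer */
            char pad2[CACHE_LINE - sizeof(int)];
        };

    Beyond padding, the cached-index trick tends to save far more: each side keeps a private, non-volatile copy of the other side's position and only rereads the shared field when the private copy says the buffer looks full (producer) or empty (consumer), so the hot loop stops bouncing a contended cache line on every operation.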


  • hibernate c3p0 broken pipe

    - by raven_arkadon
    Hi, I'm using Hibernate 3 with c3p0 for a program which constantly extracts data from some source and writes it to a database. Now the problem is that the database might become unavailable for some reason (in the simplest case: I simply shut it down). If anything is about to be written to the database there should not be any exception; the query should wait for all eternity until the database becomes available again. If I'm not mistaken, this is one of the things the connection pool could do for me: if there is a problem with the db, just retry to connect, in the worst case forever. But instead I get a broken pipe exception, sometimes followed by connection refused, and then the exception is passed to my own code, which shouldn't happen. Even if I catch the exception, how could I cleanly reinitialize Hibernate again? (So far, without c3p0, I simply built the session factory again, but I wouldn't be surprised if that could leak connections (or is it OK to do so?).) The database is Virtuoso open source edition. My hibernate.cfg.xml c3p0 config:

        <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="hibernate.c3p0.breakAfterAcquireFailure">false</property>
        <property name="hibernate.c3p0.acquireRetryAttempts">-1</property>
        <property name="hibernate.c3p0.acquireRetryDelay">30000</property>
        <property name="hibernate.c3p0.automaticTestTable">my_test_table</property>
        <property name="hibernate.c3p0.initialPoolSize">3</property>
        <property name="hibernate.c3p0.minPoolSize">3</property>
        <property name="hibernate.c3p0.maxPoolSize">10</property>

    BTW: the test table is created and I get tons of debug output, so it seems it actually reads the config.


  • Cannot write to SD card -- canWrite is returning false

    - by Fizz
    Sorry for the ambiguous title, but I'm doing the following to write a simple string to a file:

        try {
            File root = Environment.getExternalStorageDirectory();
            if (root.canWrite()) {
                System.out.println("Can write.");
                File def_file = new File(root, "default.txt");
                FileWriter fw = new FileWriter(def_file);
                BufferedWriter out = new BufferedWriter(fw);
                String defbuf = "default";
                out.write(defbuf);
                out.flush();
                out.close();
            } else
                System.out.println("Can't write.");
        } catch (IOException e) {
            e.printStackTrace();
        }

    But root.canWrite() seems to be returning false every time. I am not running this off an emulator; I have my Android Eris plugged into my computer via USB and am running the app on my phone via Eclipse. Is there a way of giving my app permission so this doesn't happen? Also, this code seems to create the file default.txt, but what if it already exists? Will it skip the creation and just open it for writing, or do I have to catch something like FileAlreadyExists (if such an exception exists) which then just opens it and writes? Thanks for any help, guys.
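
    The usual first thing to check here: writing to external storage requires a manifest permission, and canWrite() returns false without it. A sketch of the AndroidManifest.xml entry:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    Also worth noting: while the phone is mounted over USB as a mass-storage drive, the SD card can be unavailable to apps entirely, which shows up the same way.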


  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music. I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow. I usually sketch a lot, for explaining things and for developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but am spoiled by the two 19" monitors on my home desktop and my job's workstation.


  • Append data to same text file using java

    - by Manu
    SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
        String strCurrDate = formatter.format(new java.util.Date());
        String strfileNm = "Customer_" + strCurrDate + ".txt";
        String strFileGenLoc = strFileLocation + "/" + strfileNm;
        String Query1 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        String Query2 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        try {
            Statement stmt = null;
            ResultSet rs = null;
            Statement stmt1 = null;
            ResultSet rs1 = null;
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            rs = stmt.executeQuery(Query1);
            rs1 = stmt1.executeQuery(Query2);
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f, true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
            }
            bw.flush();
            bw.close();
        } catch (Exception e) {
            System.out.println("Exception occured while getting resultset by the query");
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    System.out.println("Closing the connection" + conn);
                    conn.close();
                }
            } catch (SQLException e) {
                System.out.println("Exception occured while closing the connection");
                e.printStackTrace();
            }
        }
        return objArrayListValue;

    The above code is working fine; it writes the content of the "rs" result set to a text file. Now what I want is: I need to append the content of the "rs1" result set to the same text file (i.e. I need to append the "rs1" content after the "rs" content in the same text file).
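
    One straightforward shape for that (a sketch): drain rs1 with the same writer before flushing and closing, so both result sets land in the one file, in order:

        while (rs.next()) {
            bw.write(rs.getString(1) == null ? "" : rs.getString(1));
            bw.write(" ");
        }
        // same writer, second result set -- appended right after the first
        while (rs1.next()) {
            bw.write(rs1.getString(1) == null ? "" : rs1.getString(1));
            bw.write(" ");
        }
        bw.flush();
        bw.close();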


  • How to correctly load dependent JavaScript files

    - by Vaibhav Garg
    I am trying to extend a website page that displays Google Maps with the LabeledMarker. The Google Maps API defines a class called GMarker, which is extended by the LabeledMarker. The problem is, I can't seem to load the LabeledMarker script properly, i.e. after the Google API loads, and I get the 'GMarker not defined' error. What is the correct way to specify the scripts in such cases? I am using ASP.NET's ClientScript.RegisterClientScriptInclude(), first for the Google API url and then immediately after with the LabeledMarker script file. The initial Google API loader writes further script links that load the actual GMarker class. Shouldn't all those scripts be executed before the next script block (the LabeledMarker script) is processed? I have checked the generated HTML and the script blocks are emitted in the right order.

        <script src="google api url" type="text/javascript"></script>
        ... (the above script uses document.write() etc. to append further script blocks/sources)
        <script src="Scripts/LabeledMarker.js" type="text/javascript"></script>

    Once again, LabeledMarker.js seems to get executed before the Google API finishes loading.
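
    One sketch of a workaround: window.onload only fires after every document.write()-injected script has loaded, so the dependent include can be deferred until then and appended dynamically (initMap is a hypothetical init function standing in for whatever sets up the map):

        window.onload = function () {
            var s = document.createElement('script');
            s.type = 'text/javascript';
            s.src = 'Scripts/LabeledMarker.js';
            s.onload = function () {   // older IE needs onreadystatechange instead
                initMap();             // GMarker and LabeledMarker are both defined here
            };
            document.getElementsByTagName('head')[0].appendChild(s);
        };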

