Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 594/672

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Correct use of Classloader (especially in Android)

    - by Sebi
    I read some documentation about class loaders, but I'm still not sure where and why they are needed. The Android API says: "Loads classes and resources from a repository. One or more class loaders are installed at runtime. These are consulted whenever the runtime system needs a specific class that is not yet available in-memory." So if I understand this correctly, there can be many class loaders that are responsible for loading new classes. But how does the system decide which one to use? And in which situation should a developer instantiate a new class loader? In the Android API for Intent there is a method public void setExtrasClassLoader (ClassLoader loader). The description says: "Sets the ClassLoader that will be used when unmarshalling any Parcelable values from the extras of this Intent." So can I define a special class loader there so that I can pass objects with an Intent that are not defined in the receiving activity? An example: Activity A, which is located in Project A (in Eclipse), defines an object that I want to send to Activity B in Project B using putExtra on the Intent object. If this object that is sent over the Intent is not defined (source code in Project B), there is a NoClassDefFoundException. So can I use the method setExtrasClassLoader to avoid this exception? If yes, how can I decide which ClassLoader object I have to pass? And how do I instantiate it correctly?
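
    For what it's worth, here is a minimal sketch (class names hypothetical) of how setExtrasClassLoader is typically called on the receiving side, assuming the Parcelable class is actually reachable by some ClassLoader in the receiving process, for example via a shared library project:

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;

        public class ReceiverActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Intent intent = getIntent();
                // SharedThing is hypothetical: a Parcelable visible to this app,
                // e.g. from a library project both apps reference. If the class
                // is truly absent from Project B, no ClassLoader can supply it.
                intent.setExtrasClassLoader(SharedThing.class.getClassLoader());
                SharedThing thing = intent.getParcelableExtra("thing");
            }
        }

    In other words, setExtrasClassLoader selects which loader resolves the Parcelable; it does not make code available that the receiving process never shipped.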

    Read the article

  • Transitioning from FlexBuilder 3 to FlashBuilder 4 ... there and back again.

    - by Robusto
    It's growing pains time again. Some of our stuff requires FlashBuilder 4 and some still requires FlexBuilder 3. Both are installed OK, and no projects use both IDEs. The trouble is, when I go back to work on a FlexBuilder 3 project it takes freakin' forever to build and I get weird errors. They don't seem to cause any identifiable problems except to throw up a modal dialog at various points in the build process, forcing user interaction. But I do notice that memory fills up fast in FB3, and generally FB3 starts behaving strangely and ultimately quits once it gets up over 700MB. This is only a temporary bridge situation until we get all projects into FB4, but "temporary" could mean weeks if not months. Does anyone have any advice for how to get through this bridge period? Is there anything I can do to make these two IDEs work and play well together? Failing that, does anyone know why "java.lang.String" is given as the "reason" for the problem? Does Eclipse have a resource bundle somewhere that is getting corrupted when I go back and forth between the two?

    Read the article

  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. That's the reason behind this doubt, and thus this question: is targeting the JVM a good idea when creating a compiler for a new language, or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-compatible bytecode, such that it will get the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 µops pattern on the Pentium? I can imagine some advantages (please correct me if I'm wrong): JVM bytecode is easier than x86; like x86 communicates with Windows, the JVM communicates with the Java Foundation Classes to provide I/O, threading, GUI, etc.; "lightweight" threads can be implemented (I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/); and most advantages of the Java Runtime (portability, etc.) apply. The disadvantages, as I imagine them, are: less freedom, since on x86 it'll be easier to create low-level constructs while the JVM is a higher-level (more abstract) processor; and most disadvantages of the Java Runtime (no native dynamic typing, etc.).
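
    As a point of reference, here is a minimal, hedged sketch of what emitting JVM bytecode for such a compiler might look like with the ASM library (class and method names are made up; ASM is one common backend choice, not the only one):

        import org.objectweb.asm.ClassWriter;
        import org.objectweb.asm.MethodVisitor;
        import org.objectweb.asm.Opcodes;

        // Emits the bytecode equivalent of:
        //   public class Generated { public static int answer() { return 42; } }
        public final class BytecodeSketch {
            public static byte[] emit() {
                ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES | ClassWriter.COMPUTE_MAXS);
                cw.visit(Opcodes.V1_8, Opcodes.ACC_PUBLIC, "Generated", null, "java/lang/Object", null);
                MethodVisitor mv = cw.visitMethod(Opcodes.ACC_PUBLIC | Opcodes.ACC_STATIC,
                        "answer", "()I", null, null);
                mv.visitCode();
                mv.visitLdcInsn(42);           // push the int constant 42
                mv.visitInsn(Opcodes.IRETURN); // return it
                mv.visitMaxs(0, 0);            // recomputed thanks to COMPUTE_MAXS
                mv.visitEnd();
                cw.visitEnd();
                return cw.toByteArray();       // hand this to a ClassLoader to define the class
            }
        }

    On the JIT question, the general advice is that HotSpot rewards straightforward, javac-like bytecode (simple stack shapes, small methods) rather than hand-tuned instruction patterns of the 4-1-1 kind.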

    Read the article

  • Convert asp.net webforms logic to asp.net MVC

    - by gmcalab
    I had this code in an old ASP.NET WebForms app to take a MemoryStream and send it as the Response, showing a PDF. I am now working with an ASP.NET MVC application and am looking to do the same thing, but how should I go about returning the MemoryStream as a PDF using MVC? Here's my ASP.NET WebForms code:

        private void ShowPDF(MemoryStream ms)
        {
            try
            {
                // get byte array of pdf in memory
                byte[] fileArray = ms.ToArray();

                // send file to the user
                Page.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                Page.Response.Buffer = true;
                Response.Clear();
                Response.ClearContent();
                Response.ClearHeaders();
                Response.Charset = string.Empty;
                Response.ContentType = "application/pdf";
                Response.AddHeader("content-length", fileArray.Length.ToString());
                Response.AddHeader("Content-Disposition", "attachment;filename=TID.pdf;");
                Response.BinaryWrite(fileArray);
                Response.Flush();
                Response.Close();
            }
            catch
            {
                // and boom goes the dynamite...
            }
        }

    Read the article

  • Java application design question

    - by ring bearer
    I have a hobby project, which is basically to maintain 'todo' tasks in the way I like. One task can be described as:

        public class TodoItem {
            private String subject;
            private Date dueBy;
            private Date startBy;
            private Priority priority;
            private String category;
            private Status status;
            private String notes;
        }

    As you can imagine, I could have thousands of todo items at a given time. What is the best strategy to store a todo item (currently an XML file) such that all the items are loaded quickly on app start-up? (The application shows a kind of dashboard of all the items at start-up.) What is the best way to design the back-end so that it can be ported to Android or a J2ME-based phone? Currently this is done using Java Swing. What should I concentrate on so that it works efficiently on a device where memory is limited? The application throws open a form to enter a new todo task. For now, I would like to save the newly added task to my-todos.xml once the user presses the "save" button. What are the common ways to append such a change to an existing XML file? (Note that I don't want to read the whole file again and then persist it.)
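
    On the last question, one common trick (a rough sketch under assumptions, not a drop-in answer) is to seek back over the closing root tag and overwrite it with the new element plus a fresh closing tag, so the rest of the file is never re-read. The root element name "todos" and the exact file layout are assumptions made for illustration:

        import java.io.RandomAccessFile;
        import java.nio.charset.StandardCharsets;

        public final class XmlAppendSketch {
            // Assumes the file is UTF-8 and literally ends with "</todos>\n";
            // real code would have to verify that before overwriting anything.
            public static void appendTask(String path, String taskXml) throws Exception {
                byte[] closing = "</todos>\n".getBytes(StandardCharsets.UTF_8);
                try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                    raf.seek(raf.length() - closing.length);
                    raf.write(taskXml.getBytes(StandardCharsets.UTF_8));
                    raf.write(closing);
                }
            }
        }

    For the mobile port, a line-per-record format (or SQLite on Android) tends to make both the fast start-up load and the cheap append easier than a single XML file.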

    Read the article

  • pthread condition variables on Linux, odd behaviour.

    - by janesconference
    Hi. I'm synchronizing reader and writer processes on Linux. I have 0 or more processes (the readers) that need to sleep until they are woken up, read a resource, go back to sleep and so on. Please note I don't know how many reader processes are up at any moment. I have one process (the writer) that writes to a resource, wakes up the readers and does its business until another resource is ready (in detail, I developed a no-starve readers-writers solution, but that's not important). To implement the sleep/wake-up mechanism I use a POSIX condition variable, pthread_cond_t. The clients call pthread_cond_wait() on the variable to sleep, while the server does a pthread_cond_broadcast() to wake them all up. As the manual says, I surround these two calls with a lock/unlock of the associated pthread mutex. The condition variable and the mutex are initialized in the server and shared between processes through a shared memory area (because I'm not working with threads, but with separate processes), and I'm sure my kernel/syscalls support it (because I checked _POSIX_THREAD_PROCESS_SHARED). What happens is that the first client process sleeps and wakes up perfectly. When I start the second process, it blocks on its pthread_cond_wait() and never wakes up, even if I'm sure (from the logs) that pthread_cond_broadcast() is called. If I kill the first process and launch another one, it works perfectly. In other words, pthread_cond_broadcast() seems to wake up only one process at a time. If more than one process waits on the very same shared condition variable, only the first one manages to wake up correctly, while the others just seem to ignore the broadcast. Why this behaviour? If I send a pthread_cond_broadcast(), every waiting process should wake up, not just one (and, in any case, not always the same one).

    Read the article

  • Why execution of a portion of code loaded from external file is not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described below

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random Bytes and display here ...
          // Instructions of loading of the binary file into MyBuffer using merely **GetMem** here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (The outcome of the processing !)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    and it works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?

    Read the article

  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot of this is that I run out of heap space when running the following code:

        reader = element.getTextAsStream( true );

    The OutOfMemoryError occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next:

        java.lang.OutOfMemoryError: Java heap space
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
            at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198)
            at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790)
            at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826)
            at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)
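
    In plain StAX terms, coalescing is controlled by the IS_COALESCING property on the XMLInputFactory; how to get Axis2/Axiom to build its reader from such a factory is the open question above, but the property itself looks like this (a minimal standalone sketch, not Axis2-specific code):

        import java.io.StringReader;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public final class NonCoalescingSketch {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                // With coalescing off, large text nodes arrive as a series of
                // CHARACTERS events instead of one fully buffered string.
                factory.setProperty(XMLInputFactory.IS_COALESCING, Boolean.FALSE);
                XMLStreamReader reader = factory.createXMLStreamReader(
                        new StringReader("<doc>...large base64 text...</doc>"));
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.CHARACTERS) {
                        String chunk = reader.getText(); // process chunk by chunk
                    }
                }
                reader.close();
            }
        }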

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
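
    For concreteness, here is a rough sketch of the hashmap-of-streams idea. The 6-byte key is taken from the question, but the one-byte length prefix and all class and method names are assumptions made purely for illustration:

        import java.io.BufferedOutputStream;
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;

        public final class MessageSplitterSketch {
            private final Map<String, BufferedOutputStream> outputs = new HashMap<>();

            // Assumed record layout: [6-byte ASCII type][1-byte payload length][payload].
            public void split(InputStream in) throws IOException {
                DataInputStream data = new DataInputStream(in);
                byte[] key = new byte[6];
                while (true) {
                    try {
                        data.readFully(key);
                    } catch (EOFException end) {
                        break; // clean end of input
                    }
                    String type = new String(key, StandardCharsets.US_ASCII);
                    byte[] payload = new byte[data.readUnsignedByte()];
                    data.readFully(payload);
                    streamFor(type).write(payload);
                }
                for (BufferedOutputStream out : outputs.values()) {
                    out.close();
                }
            }

            // One buffered, append-mode stream per message type, created lazily.
            private BufferedOutputStream streamFor(String type) throws IOException {
                BufferedOutputStream out = outputs.get(type);
                if (out == null) {
                    out = new BufferedOutputStream(new FileOutputStream(type + ".dat", true));
                    outputs.put(type, out);
                }
                return out;
            }
        }

    One practical caveat: roughly 6,000 simultaneously open streams can run into OS file-descriptor limits, so a bounded cache that closes the least-recently-used streams is a common refinement.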

    Read the article

  • document.onclick settimeout function javascript help

    - by Jamex
    Hi, I have a document.onclick function that I would like to delay. I can't seem to get the syntax right. My original code is:

        <script type="text/javascript">
        document.onclick = check;
        function check(e){ do something }

    I tried the code below, but it is incorrect; the function did not execute and nothing happened:

        <script type="text/javascript">
        document.onclick = setTimeout("check", 1000);
        function check(e){ do something }

    I tried the next set; the function got executed, but with no delay:

        <script type="text/javascript">
        setTimeout(document.onclick = check, 1000);
        function check(e){ do something }

    What is the correct syntax for this code? TIA

    Edit: The solutions were all good. My problem was that I use the function check to obtain the id of the element being clicked on, but after the delay there is no "memory" of what was clicked on, so the rest of the function does not get executed. Jimr wrote the short code to preserve the clicked event. The code that is working is:

        // Delay execution of event handler function "f" by "time" ms.
        document.onclick = makeDelayedHandler(check, 250);

        function makeDelayedHandler(f, time) {
            return function(e) { setTimeout(function() { f(e); }, time); };
        }

        function check(e) {
            var click = (e && e.target) || (event && event.srcElement);
            . . .

    Thank you all.

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to CouchDB. My idea to implement this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire();
          // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch() // Sub-Document is reset here, and *not* refetched!
              .fail(function(error) {
                App.displayError(error);
              })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the in-memory model after the fetch, although the _rev attribute is correct (as are all attributes of the main object). Here is part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean,
            ..... // other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestions! Kind regards, Thomas

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.); each property type shares a majority of the attributes, but there are a few that are unique to that property type. My concern is that the shared attributes run to more than 250 fields, which seems like too many to have in a single table. My thought is that I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain since any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 with Entity Framework 5, and C#, and data is passed to the ASP.NET web application via a WCF service. Thanks in advance.

    Read the article

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern based upon the vertices of the polygon, whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute-force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.

    Read the article

  • Putting BigDecimal data into HSQLDB test database using DbUnit

    - by Denise
    Hi everyone, I'm using Hibernate JPA in my backend. I am writing a unit test using JUnit and DbUnit to insert a set of data into an in-memory HSQL database. My dataset contains:

        <order_line order_line_id="1" quantity="2" discount_price="0.3"/>

    which maps to an OrderLine Java object where the discount_price column is defined as:

        @Column(name = "discount_price", precision = 12, scale = 2)
        private BigDecimal discountPrice;

    However, when I run my test case and assert that the discount price returned equals 0.3, the assertion fails and says that the stored value is 0. If I change the discount_price in the dataset to 0.9, it rounds up to 1. I've checked to make sure HSQLDB isn't doing the rounding, and it definitely isn't, because I can insert an order line object using Java code with a value like 5.3 and it works fine. To me, it seems like DbUnit is for some reason rounding the number I've defined. Is there a way I can force this not to happen? Can anyone explain why it might be doing this? Thanks!
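
    One thing commonly checked when pairing DbUnit with HSQLDB (offered only as a hedged suggestion, not as the confirmed cause of the rounding here) is whether the DbUnit connection is configured with an HSQLDB-aware data type factory, since the default factory can mishandle vendor-specific numeric types. A minimal sketch of that setup:

        import java.sql.Connection;
        import org.dbunit.database.DatabaseConfig;
        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.ext.hsqldb.HsqldbDataTypeFactory;

        public final class DbUnitConnectionFactory {
            // Wraps a plain JDBC connection for DbUnit and tells it to use
            // HSQLDB-specific data type handling.
            public static IDatabaseConnection wrap(Connection jdbcConnection) throws Exception {
                IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
                DatabaseConfig config = connection.getConfig();
                config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
                        new HsqldbDataTypeFactory());
                return connection;
            }
        }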

    Read the article

  • How to handle error with content-disposition

    - by František Žiacik
    Hi, how should I handle an exception that occurs after sending a Content-Disposition header for an attachment? I'm trying to generate a report at the server and send it as a file, but if an exception occurs during the report generation, the error message itself is sent to the browser, which still treats it as the content of a file and shows a Save As dialog. The user cannot know there was an error generating the report and saves the file, which is now in the wrong format. Is there a way to cancel the response with this header and redirect to an error page? Or what else can I do to inform the user about the error? I could probably generate the report first and send the headers only if there was no error, but I want the report rendered directly to the Response output stream so that it does not need to stay in memory. Here is my code:

        this.Response.ContentType = "application/octet-stream";
        this.Response.AddHeader("Content-Disposition", @"attachment; filename=""" + item.Name + @"""");
        this.Response.Flush();
        GenerateReportTo(this.Response.OutputStream); // Exception occurs

    Thanks for any suggestions.

    Read the article

  • using volatile keyword

    - by sap
    As I understand it, if we declare a variable as volatile, then it will not be stored in the local cache: whenever a thread updates the value, it is written to main memory, so other threads can access the updated value. But in the following program both the volatile and non-volatile variables display the same value; the volatile variable is not updated for the second thread. Can anybody please explain why testValue is not changed?

        class ExampleThread extends Thread {
            private int testValue1;
            private volatile int testValue;

            public ExampleThread(String str) {
                super(str);
            }

            public void run() {
                if (getName().equals("Thread 1 ")) {
                    testValue = 10;
                    testValue1 = 10;
                    System.out.println("Thread 1 testValue1 : " + testValue1);
                    System.out.println("Thread 1 testValue : " + testValue);
                }
                if (getName().equals("Thread 2 ")) {
                    System.out.println("Thread 2 testValue1 : " + testValue1);
                    System.out.println("Thread 2 testValue : " + testValue);
                }
            }
        }

        public class VolatileExample {
            public static void main(String args[]) {
                new ExampleThread("Thread 1 ").start();
                new ExampleThread("Thread 2 ").start();
            }
        }

    Output:

        Thread 1 testValue1 : 10
        Thread 1 testValue : 10
        Thread 2 testValue1 : 0
        Thread 2 testValue : 0
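
    Worth noting when reading the program above: testValue is an instance field, and each ExampleThread object gets its own copy, so the two threads never share it regardless of the volatile keyword. For contrast, a minimal sketch (not the original program) where two threads genuinely share one volatile field:

        public class SharedVolatileSketch {
            // Shared by every thread in the process, and volatile so the
            // reader is guaranteed to eventually see the writer's update.
            private static volatile boolean ready = false;

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(new Runnable() {
                    public void run() {
                        while (!ready) {
                            // spin until the main thread's write becomes visible
                        }
                        System.out.println("reader saw the update");
                    }
                }, "reader");
                reader.start();
                Thread.sleep(100); // give the reader time to start spinning
                ready = true;      // volatile write
                reader.join();
            }
        }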

    Read the article

  • .net IHTTPHandler Streaming SQL Binary Data

    - by Yisman
    Hello everybody, I am trying to implement an IHttpHandler for streaming files. Files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the line? Most of the code seems to first read the whole field from MSSQL into memory and then use streaming only for the output writing. Wouldn't it be more efficient to actually stream from disk directly to HTTP byte by byte (or in buffered chunks)? Here's my code so far, but I can't figure out the correct combination of the SqlDataReader mode, the stream object and the writing system:

        Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.BufferOutput = False
            Dim FileField = safeparam(context.Request.QueryString("FileField"))
            Dim FileTable = safeparam(context.Request.QueryString("FileTable"))
            Dim KeyField = safeparam(context.Request.QueryString("KeyField"))
            Dim FileKey = safeparam(context.Request.QueryString("FileKey"))
            Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString)
                Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection)
                    command.CommandType = Data.CommandType.Text
                End Using
            End Using
        End Sub

    Please be aware that this SQL command also returns the file extension (pdf, jpg, doc, ...) in the second field of the query. Thank you all very much.

    Read the article

  • Float compile-time calculation not happening?

    - by Klaim
    A little test program:

        #include <iostream>

        const float TEST_FLOAT = 1/60;
        const float TEST_A = 1;
        const float TEST_B = 60;
        const float TEST_C = TEST_A / TEST_B;

        int main()
        {
            std::cout << TEST_FLOAT << std::endl;
            std::cout << TEST_C << std::endl;
            std::cin.ignore();
            return 0;
        }

    Result:

        0
        0.0166667

    Tested on Visual Studio 2008 & 2010. I have worked with other compilers that, if I remember well, gave the first result the same value as the second. Now my memory could be wrong, but shouldn't TEST_FLOAT have the same value as TEST_C? If not, why? Is TEST_C's value resolved at compile time or at runtime? I always assumed the former, but now that I see those results I have some doubts...
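
    As a point of comparison (a deliberately small sketch, not the original program), the same integer-division pitfall can be reproduced in Java: 1/60 is evaluated as integer division and yields 0 before any conversion to float happens, while 1.0f/60 is done in floating point:

        public class IntegerDivisionSketch {
            public static void main(String[] args) {
                final float fromInts = 1 / 60;      // int division happens first, so this is 0.0f
                final float fromFloats = 1.0f / 60; // floating-point division: ~0.0166667
                System.out.println(fromInts);
                System.out.println(fromFloats);
            }
        }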

    Read the article

  • Is there a way to get number of connections in Signalr hub group?

    - by pajo
    Here is my problem: I want to track whether a user is online or offline and notify other clients about it. I'm using hubs and have implemented both the IConnected and IDisconnect interfaces. My idea was to send a notification to all clients when the hub detects a connect or disconnect. By default, when a user refreshes the page he will get a new connection id, and eventually the previous connection will call disconnect, notifying other clients that the user is offline even though he's actually online. I tried using my own ConnectionIdFactory returning the username as the connection id, but with multiple tabs open it will at some point detect the user's connection id as disconnected, and after that the client-side hub will try, unsuccessfully, to reconnect to the hub in an endless loop, wasting memory and CPU and making the browser almost unusable. I needed to fix it fast, so I removed my factory, and now I add every new connection to a group named after the username, so I can easily notify a single user on all connections. But then I have the problem of detecting whether the user is online or offline, as I don't know how many active connections the user has. So I'm wondering: is there a way to get the number of connections in one group? Or does anybody have a better idea for tracking when a user goes offline? I'm using SignalR 0.4.

    Read the article

  • Apache module, is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:
    1. The client connects to the server. I maintain the socket and metadata about the socket; the metadata describes what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the open sockets and goes through each of them, sending the updates if required.
    Can we do something like this in an Apache module:
    1. The Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept a new connection.
    2. Although the Apache process has returned its status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the clients, if any.
    So can an Apache process do these things:
    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?
    Regards, Prashant

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read before of several ways to get this functionality on Linux (apparently this is built in on Windows), but none of them seem suitable for my app. The methods I know of are:
    1. Have one thread wait on each of the condition variables you want to wait on; when woken, it signals a single condition variable which you wait on instead.
    2. Cycle through multiple condition variables with a timed wait.
    3. Write dummy bytes to files or pipes instead, and poll on those.
    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), then the writing thread will be stuck because the pipe's buffer will be full, as will any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran. Is there a better way to do this? Curious for answers appropriate for Solaris as well.

    Read the article

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on a project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat files looks as follows:

        DeviceID    InteractingDeviceID    InteractionStartTime    InteractionEndTime
        1           2                      1101                    1105

    The fields 1, 2, 1101 and 1105 are tab delimited, and the row means Device 1 interacted with Device 2 starting at 1101 ms and ending at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze these interactions. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line that's read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that will hold a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the interacting device id, the interaction start time and the interaction end time. I have a two-fold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks.

    Read the article

  • How can I SETF an element in a tree by an accessor?

    - by Willi Ballenthin
    We've been using Lisp in my AI course. The assignments I've received have involved searching and generating tree-like structures. For each assignment, I've ended up writing something like:

        (defun initial-state ()
          (list 0    ; score
                nil  ; children
                0    ; value
                0))  ; something else

    and building my functions around these "states", which are really just nested lists with some loosely defined structure. To make the structure more rigid, I've tried to write accessors, such as:

        (defun state-score ( state )
          (nth 2 state))

    This works for reading the value (which should be all I need to do in a nicely functional world. However, as time crunches, and I start to madly hack, sometimes I want a mutable structure). I don't seem to be able to SETF the returned ...thing (place? value? pointer?). I get an error with something like:

        (setf (state-score *state*) 10)

    Sometimes I seem to have a little more luck writing the accessor/mutator as a macro:

        (defmacro state-score ( state )
          `(nth 2 ,state))

    However, I don't know why this should be a macro, so I certainly shouldn't write it as a macro (except that sometimes it works, and programming by coincidence is bad). What is an appropriate strategy to build up such structures? More importantly, where can I learn about what's going on here (what operations affect the memory in what way)?

    Read the article

  • Problem in accessing Windows shared folder on Ubuntu using terminal

    - by vikramtheone
    Hi guys. Description: I have 2 systems with me, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little external memory, my idea to overcome this is to make the project folder on the host side a shared folder with full access and access it on Ubuntu over the network. To achieve this, I have installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and I simply mount it. A link appears on the desktop. Next, using Nautilus, I open the link and I can access the contents of the shared folder. Problem: Even though I mount the shared project folder, I don't see it appearing in the /media or /mnt folders, and as a result I don't know what path to use to access this folder from the terminal. For example: when I mounted my USB stick, as expected, a link for the device appeared on the desktop and I also saw a folder in /media. So, similarly, a mounted shared folder should have appeared in /mnt too. Can anyone suggest what I should do now? There are many posts around, but no solid solution for this problem. Help!!! :) Vikram

    Read the article
