Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Haskell: "how much" of a type should functions receive? and avoiding complete "reconstruction"

    - by L01man
    I've got these data types: data PointPlus = PointPlus { coords :: Point , velocity :: Vector } deriving (Eq) data BodyGeo = BodyGeo { pointPlus :: PointPlus , size :: Point } deriving (Eq) data Body = Body { geo :: BodyGeo , pict :: Color } deriving (Eq) It's the base datatype for characters, enemies, objects, etc. in my game (well, I just have two rectangles as the player and the ground right now :p). When a key is pressed, the character moves right, left or jumps by changing its velocity. Moving is done by adding the velocity to the coords. Currently, it's written as follows: move (PointPlus (x, y) (xi, yi)) = PointPlus (x + xi, y + yi) (xi, yi) I'm just taking the PointPlus part of my Body and not the entire Body, otherwise it would be: move (Body (BodyGeo (PointPlus (x, y) (xi, yi)) wh) col) = (Body (BodyGeo (PointPlus (x + xi, y + yi) (xi, yi)) wh) col) Is the first version of move better? Anyway, if move only changes PointPlus, there must be another function that calls it inside a new Body. Let me explain: there's a function update which is called to update the game state; it is passed the current game state, a single Body for now, and returns the updated Body. update (Body (BodyGeo (PointPlus xy (xi, yi)) wh) pict) = (Body (BodyGeo (move (PointPlus xy (xi, yi))) wh) pict) That tickles me. Everything is kept the same within Body except the PointPlus. Is there a way to avoid this complete "reconstruction" by hand? Like in: update body = backInBody $ move $ pointPlus body Without having to define backInBody, of course.

    Read the article

  • Python Twisted Client Connection Lost

    - by MovieYoda
    I have this twisted client, which connects with a twisted server having an index. I ran this client from the command line. It worked fine. Now I modified it to run in a loop (see main()) so that I can keep querying. But the client runs only once. Next time it simply says connection lost \n Connection lost - goodbye!. What am I doing wrong? In the loop I am reconnecting to the server, is that wrong? from twisted.internet import reactor from twisted.internet import protocol from settings import AS_SERVER_HOST, AS_SERVER_PORT # a client protocol class Spell_client(protocol.Protocol): """Once connected, send a message, then print the result.""" def connectionMade(self): self.transport.write(self.factory.query) def dataReceived(self, data): "As soon as any data is received, write it back." if data == '!': self.factory.results = '' else: self.factory.results = data self.transport.loseConnection() def connectionLost(self, reason): print "\tconnection lost" class Spell_Factory(protocol.ClientFactory): protocol = Spell_client def __init__(self, query): self.query = query self.results = '' def clientConnectionFailed(self, connector, reason): print "\tConnection failed - goodbye!" reactor.stop() def clientConnectionLost(self, connector, reason): print "\tConnection lost - goodbye!" reactor.stop() # this connects the protocol to a server running on port 8090 def main(): print 'Connecting to %s:%d' % (AS_SERVER_HOST, AS_SERVER_PORT) while True: print query = raw_input("Query:") if query == '': return f = Spell_Factory(query) reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, f) reactor.run() print f.results return if __name__ == '__main__': main()
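    A possible direction, sketched under a couple of assumptions (it reuses the Spell_client protocol and settings from above, and the LoopingSpellFactory name is mine, not from the original code): Twisted's reactor can only be started once per process, and reactor.stop() shuts it down for good, so calling reactor.run() inside the while loop only works for the first query. One way around that is to keep a single running reactor and ask for the next query from clientConnectionLost:

        from twisted.internet import reactor, protocol
        from settings import AS_SERVER_HOST, AS_SERVER_PORT

        class LoopingSpellFactory(protocol.ClientFactory):
            protocol = Spell_client              # reuse the protocol class from the question

            def __init__(self, query):
                self.query = query
                self.results = ''

            def clientConnectionLost(self, connector, reason):
                print self.results               # show the result of this query...
                ask_next_query()                 # ...then prompt for the next one

            def clientConnectionFailed(self, connector, reason):
                print "\tConnection failed - goodbye!"
                reactor.stop()

        def ask_next_query():
            query = raw_input("Query:")
            if query == '':
                reactor.stop()                   # stop only when the user is finished
                return
            reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, LoopingSpellFactory(query))

        def main():
            print 'Connecting to %s:%d' % (AS_SERVER_HOST, AS_SERVER_PORT)
            reactor.callWhenRunning(ask_next_query)
            reactor.run()                        # started exactly once

        if __name__ == '__main__':
            main()

    The raw_input prompt blocks the reactor while it waits, which is acceptable for a simple interactive tool like this but would not be appropriate inside a server.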

    Read the article

  • Is it possible to receive SMS message on appWidget?

    - by cappuccino
    Is it possible to receive an SMS message on an appWidget? I saw the Android sample source (API Demos). In API Demos, the ExampleAppWidgetProvider class extends AppWidgetProvider, not Activity. So, I guess it is impossible to register an SMS receiver like this: rcvIncoming = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { Log.i("telephony", "SMS received"); Bundle data = intent.getExtras(); if (data != null) { // SMS uses a data format known as a PDU Object pdus[] = (Object[]) data.get("pdus"); String message = "New message:\n"; String sender = null; for (Object pdu : pdus) { SmsMessage part = SmsMessage.createFromPdu((byte[])pdu); message += part.getDisplayMessageBody(); if (sender == null) { sender = part.getDisplayOriginatingAddress(); } } Log.i(sender, message); } } }; registerReceiver(rcvIncoming, new IntentFilter("android.provider.Telephony.SMS_RECEIVED")); My goal is to receive SMS messages on my custom appWidget. Any help would be appreciated!!

    Read the article

  • How do I abort a socket.recv() from another thread in python?

    - by Samuel Skånberg
    I have a main thread that waits for connections. It spawns client threads that will echo the response from the client (telnet in this case). But say that I want to close down all sockets and all threads after some time, like after 1 connection. How would I do that? If I do clientSocket.close() from the main thread, it won't stop doing the recv. It will only stop if I first send something through telnet; then it will fail doing further sends and recvs. My code looks like this: # Echo server program import socket from threading import Thread import time class ClientThread(Thread): def __init__(self, clientSocket): Thread.__init__(self) self.clientSocket = clientSocket def run(self): while 1: try: # It will hang here, even if I do close on the socket data = self.clientSocket.recv(1024) print "Got data: ", data self.clientSocket.send(data) except: break self.clientSocket.close() HOST = '' PORT = 6000 serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) serverSocket.bind((HOST, PORT)) serverSocket.listen(1) clientSocket, addr = serverSocket.accept() print 'Got a new connection from: ', addr clientThread = ClientThread(clientSocket) clientThread.start() time.sleep(1) # This won't make the recv in the clientThread stop immediately, # nor will it generate an exception clientSocket.close()
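    A hedged sketch of the usual workaround (an assumption about the intent, not a tested drop-in): close() by itself does not wake up a recv() that is blocked in another thread, but shutdown(SHUT_RDWR) does; the blocked recv() then returns an empty string and the client thread can fall out of its loop.

        import socket

        def stop_client(sock):
            """Unblock a recv() another thread is waiting on, then close the socket."""
            try:
                sock.shutdown(socket.SHUT_RDWR)   # interrupts the blocked recv()
            except socket.error:
                pass                              # already disconnected
            sock.close()

        # in the main thread, instead of clientSocket.close():
        # stop_client(clientSocket)

    With that in place it also helps to break out of the client thread's loop as soon as recv() returns an empty string, rather than relying on the following send() to raise.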

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP based web application which is currently only using one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of: simple network storage (NFS, SMB/CIFS) and clustered filesystems (Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS). What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files) but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • Do I really need bindParam?

    - by sandelius
    Hi there! I'm trying to do a little PDO CRUD to learn some PDO. I have a question about bindParam. Here's my update method right now: public static function update($conditions = array(), $data = array(), $table = '') { self::instance(); // Late static bindings (PHP 5.3) $table = ($table === '') ? self::table() : $table; // Check which data array we want to use $values = (empty($data)) ? self::$_fields : $data; $sql = "UPDATE $table SET "; foreach ($values as $f => $v) { $sql .= "$f = ?, "; } // let's build the conditions self::build_conditions($conditions); // fix our WHERE, AND, OR, LIKE conditions $extra = self::$condition_string; // querystring $sql = rtrim($sql, ', ') . $extra; // let's merge the arrays into one $v_val = array_values($values); $c_val = array_values($conditions); $array = array_merge($v_val, self::$condition_array); $stmt = self::$db->prepare($sql); return $stmt->execute($array); } In my "self::$condition_array" I get all the right values for the ?s. So the query looks like this: UPDATE table SET this = ?, another = ? WHERE title = ? AND time = ? As you can see, I don't use bindParam; instead I pass the right values in the right order ($array) directly into the execute($array) method. This works like a charm, BUT is it safe not to use bindParam here? If not, then how can I do it? Thanks from Sweden, Tobias

    Read the article

  • Unable to Get a Correct Time when I am Calling serverTime using jquery.countdown.js + Asp.net ?

    - by user312891
    When I call the function below, I am unable to get a correct answer. Both vars, shortly and newTime, have the same time: one is coming from the client side, the other is synced with the server. http://keith-wood.name/countdown.html I am waiting for your response. Thanks $(function() { var shortly = new Date('April 9, 2010 20:38:10'); var newTime = new Date('April 9, 2010 20:38:10'); //for loop divid /// $('#defaultCountdown').countdown({ until: shortly, onExpiry: liftOff, onTick: watchCountdown, serverSync: serverTime }); $('#div1').countdown({ until: newTime }); }); function serverTime() { var time = null; $.ajax({ type: "POST", //Page Name (in which the method should be called) and method name url: "Default.aspx/GetTime", // If you want to pass parameter or data to server side function you can try line contentType: "application/json; charset=utf-8", dataType: "json", data: "{}", async: false, //else If you don't want to pass any value to server side function leave the data to blank line below //data: "{}", success: function(msg) { //Got the response from server and render to the client time = new Date(msg.d); alert(time); }, error: function(msg) { time = new Date(); alert('1'); } }); return time; }

    Read the article

  • jquery ajax post and response evaluation

    - by Rami
    Hello people, I am relatively new to JavaScript and jQuery in particular, so please bear with me. I am trying to loop through multiple forms and then serialize() the data with jQuery and post it using ajax to my page. This is happening alright, and the data is posted, and my PHP script echoes 1 and everything is taken care of, but for some strange reason the following code is not working, especially the "success" variable: it's not increasing at all! Would you please help me? $('.submitB').click(function(){ var success = 0; var times = 0; var alertText; $('.input').each(function(){ times++; var serializedForms = $(this).serialize(); $.post('<?=$this->config->site_url()?>crud/additem/forms', serializedForms ,function(data){ if (data) { success++; } }); }); if (times) { alertText = "?? ????? " + success + " ???? ?? ??? " + times + " ?????."; alert(alertText); } }) The Arabic text just says "success + entries from + times + were entered successfully". Thank you in advance.

    Read the article

  • How can I learn Enterprise Library 4.0?

    - by ykaratoprak
    I am trying to learn Enterprise Library. I found this useful code to get data from SQL, but I am trying to send data via a parameter, and also to use the UPDATE, DELETE and SAVE methods. Can you give some samples? I'm using Enterprise Library 4.0! using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; using Microsoft.Practices.EnterpriseLibrary.Common; using Microsoft.Practices.EnterpriseLibrary.Data; namespace WebApplicationForEnterpirires { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Database objdbase = DatabaseFactory.CreateDatabase("connectionString"); DataSet ds = objdbase.ExecuteDataSet(CommandType.StoredProcedure, "sp_GetProducts"); GridView1.DataSource = ds; GridView1.DataBind(); } } }

    Read the article

  • how to determine if a character vector is a valid numeric or integer vector

    - by Andrew Barr
    I am trying to turn a nested list structure into a dataframe. The list looks similar to the following (it is serialized data from parsed JSON read in using the httr package). myList <- list(object1 = list(w=1, x=list(y=0.1, z="cat")), object2 = list(w=2, x=list(y=0.2, z="dog"))) unlist(myList) does a great job of recursively flattening the list, and I can then use lapply to flatten all the objects nicely. flatList <- lapply(myList, FUN= function(object) {return(as.data.frame(rbind(unlist(object))))}) And finally, I can button it up using plyr::rbind.fill myDF <- do.call(plyr::rbind.fill, flatList) str(myDF) #'data.frame': 2 obs. of 3 variables: #$ w : Factor w/ 2 levels "1","2": 1 2 #$ x.y: Factor w/ 2 levels "0.1","0.2": 1 2 #$ x.z: Factor w/ 2 levels "cat","dog": 1 2 The problem is that w and x.y are now being interpreted as character vectors, which by default get parsed as factors in the dataframe. I believe that unlist() is the culprit, but I can't figure out another way to recursively flatten the list structure. A workaround would be to post-process the dataframe, and assign data types then. What is the best way to determine if a vector is a valid numeric or integer vector?

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements: Property DateOfBirth of entity Patient is stored in SQL Server as a string. It would be ideal if the entity class did not also use the "string" type but rather a DateTime type. (I would expect this to be possible since we're abstracting away from the storage layer.) Where could a conversion mechanism be put in place that would convert to and from DateTime/string so that the entity and SQL Server are in sync? I cannot change the storage layer's structure, so I have to work around it. WCF Data Services (read-only, so no need for saving changes) need to be used since clients will be able to use LINQ expressions to consume the service. They can generate results based on any given query scenario they need and not be constrained by a single method such as GetPatient(int ID). I've tried to use DTOs, but ran into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible... or it is too complicated if it is. I've tried to use Self Tracking Entities but they require the metadata from the .edmx file if I'm correct, and this doesn't allow a different property data type. I also want to add customizations to my entity getter methods so that a property "MRN" of type "string" has .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods, but the problem with that is Entity Framework will overwrite it the next time it refreshes the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?

    Read the article

  • Pointer initialization

    - by SoulBeaver
    Sorry if this question has been asked before. On my search through SO I didn't find one that asked what I wanted to know. Basically, when I have this: typedef struct node { int data; node *node; } *head; and do node *newItem = new node; I am under the impression that I am declaring and reserving space for, but not defining, a pointer to struct node, is that correct? So when I do newItem->data = 100 and newItem->next = 0 I get confused. newItem = 0 would declare what exactly? Both data and next? The object as a whole? I'm especially confused when I use typedef. Which part is the macro? I assume node because that's how I call it, but why do I need it? Finally, what happens when I do: node *temp; temp = new node; temp = head->next; head->next = newItem; newItem->next = temp; I mean, head->next is a pointer pointing to the object newItem, so I assume not to newItem.data or next themselves. So how can I safely use an uninitialized pointer like the one I described above here? Is head now not pointing to an uninitialized pointer?

    Read the article

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial: Sockets come in two primary flavors. An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established ... Each port can have a single passive socket binded to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets. ... Then the tutorial explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: Suppose there's an incoming connection to the port, and the sender wants to send some little data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we persist a once established connection for a longer time? Can you tell me what I'm missing here? My second question is, who determines the limit of the concurrently working active sockets?
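    For what it's worth, here is a small Python sketch (mine, not from the tutorial) of how the two flavors map onto code: listen() makes a socket passive, each accept() returns a new active socket, and nothing forces that active socket to close after one message. It stays open until one side closes it, so a sender that only talks every 20 minutes can keep reusing the same connection as long as neither side hangs up; the practical limit on how many can stay open comes from the operating system (file descriptors, memory), not from the port itself.

        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(('', 7000))
        server.listen(5)                  # passive socket: it only accepts connections

        conn, addr = server.accept()      # active socket: one per established connection
        while True:
            data = conn.recv(1024)        # blocks until the peer sends something...
            if not data:                  # ...and returns '' only when the peer closes
                break
            conn.sendall(data)            # echo it back; the connection itself stays open
        conn.close()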

    Read the article

  • POST variables to web server?

    - by OverTheRainbow
    Hello I've been trying several things from Google to POST data to a web server, but none of them work: I'm still stuck at how to convert the variables into the request, considering that the second variable is an SQL query so it has spaces. Does someone know the correct way to use a WebClient to POST data? I'd rather use WebClient because it requires less code than HttpWebRequest/HttpWebResponse. Here's what I tried so far: Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim wc = New WebClient() 'convert data wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded") Dim postData = String.Format("db={0}&query={1}", _ HttpUtility.UrlEncode("books.sqlite"), _ HttpUtility.UrlEncode("SELECT id,title FROM boooks")) 'Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books") 'POST query Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", postData) RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData) Exit Sub Dim client = New WebClient() Dim nv As New Collection nv.Add("db", "books.sqlite") nv.Add("query", "SELECT id,title FROM books") Dim address As New Uri("http://localhost:9999/get") 'Dim bytRetData As Byte() = client.UploadValues(address, "POST", nv) RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData) Exit Sub 'Dim wc As New WebClient() 'convert data wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded") Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books") 'POST query 'Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", bytArguments) RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData) Exit Sub End Sub Thank you.

    Read the article

  • jquery noob problem with variables (scope?)

    - by aharon
    I'm trying to retrieve data from a php file named return that just contains <?php echo 'here is a string'; ?> I'm doing this through an html file containing <!DOCTYPE html> <html> <head> <script src="http://code.jquery.com/jquery-latest.min.js"></script> <script> var x; $.get("return.php", function(data){ x = data; }) function showAlert() {alert(x);} $(document).ready(function(){ alert(x); }); </script> </head> <body> <input type = "button" value = "Click here" onClick="showAlert();"> </body> </html> When the button is clicked it retrieves and displays the code fine, but on the $(document).ready thing, it displays "undefined" instead of the data in return.php. Any solutions? Thanks.

    Read the article

  • Singleton EXC_BAD_ACCESS

    - by lclaud
    Hi, so I have a class that I declare as a singleton and in that class I have an NSMutableArray that contains some NSDictionaries with some key/value pairs in them. The trouble is it doesn't work and I don't know why... I mean it crashes with EXC_BAD_ACCESS but I don't know where. I followed the code and it did create a new array on the first add, made it to the end of the function... and crashed. @interface dataBase : NSObject { NSMutableArray *inregistrari; } @property (nonatomic,retain) NSMutableArray *inregistrari; -(void)adaugaInregistrareCuData:(NSDate *)data siValoare:(NSNumber *)suma caVenit:(BOOL)venit cuDetaliu:(NSString *)detaliu; -(NSDictionary *)raportIntreData:(NSDate *)dataInitiala siData:(NSDate *)dataFinala; -(NSArray *)luniDisponibileIntreData:(NSDate *)dataInitiala siData:(NSDate *)dataFinala; -(NSArray *)aniDisponibiliIntreData:(NSDate *)dataInitiala siData:(NSDate *)dataFinala; -(NSArray *)vectorDateIntreData:(NSDate *)dataI siData:(NSDate *)dataF; -(void)salveazaInFisier; -(void)incarcaDinFisier; + (dataBase *)shareddataBase; @end And here is the .m file: #import "dataBase.h" #import "SynthesizeSingleton.h" @implementation dataBase @synthesize inregistrari; SYNTHESIZE_SINGLETON_FOR_CLASS(dataBase); -(void)adaugaInregistrareCuData:(NSDate *)data siValoare:(NSNumber *)suma caVenit:(BOOL)venit cuDetaliu:(NSString *)detaliu{ NSNumber *v=[NSNumber numberWithBool:venit]; NSArray *input=[NSArray arrayWithObjects:data,suma,v,detaliu,nil]; NSArray *keys=[NSArray arrayWithObjects:@"data",@"suma",@"venit",@"detaliu",nil]; NSDictionary *inreg=[NSDictionary dictionaryWithObjects:input forKeys:keys]; if(inregistrari == nil) { inregistrari=[[NSMutableArray alloc ] initWithObjects:inreg,nil]; }else { [inregistrari addObject:inreg]; } [inreg release]; [input release]; [keys release]; } It made it to the end of that adaugaInregistrareCuData OK, said the array had one object... and then crashed.

    Read the article

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of application

    - by Extrakun
    The situation: Using an off-the-shelf PHP application, I have to add in a new module for extra functionality. Today, it is made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content, etc. So the data for the new functionality goes into a 'shared' database. The data for the application (user login, content, uploads) goes into a 'local' database. To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A re-write of the base application will take too long. I only have control over the new module which I am writing. The ideal solution: Is there a way to encapsulate 2 databases into one name using MySQL? I do not wish to switch DB connections or specifically name the DB to query from inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can invisibly attempt to read/write to two different DBs. What is the best way to handle this problem?

    Read the article

  • Trying to get django app to work with mod_wsgi on CentOS 5

    - by David
    I'm running CentOS 5, and am trying to get a django application working with mod_wsgi. I'm using .wsgi settings I got working on Ubuntu. Here is the error: [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Target WSGI script '/data/hosting/cubedev/apache/django.wsgi' cannot be loaded as Python module. [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Exception occurred processing WSGI script '/data/hosting/cubedev/apache/django.wsgi'. [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] Traceback (most recent call last): [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/data/hosting/cubedev/apache/django.wsgi", line 8, in [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] import django.core.handlers.wsgi [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 1, in [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from threading import Lock [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/threading.py", line 13, in [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from functools import wraps [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/functools.py", line 10, in [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from _functools import partial, reduce [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly And here is my .wsgi file import os import sys os.environ['PYTHON_EGG_CACHE'] = '/tmp/django/' os.environ['DJANGO_SETTINGS_MODULE'] = 'cube.settings' sys.path.append('/data/hosting/cubedev') import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler()
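    One thing that might help narrow this down (a guess, not a confirmed diagnosis): "dynamic module not initialized properly" is the classic symptom of mod_wsgi having been compiled against a different Python than the one whose libraries it is importing (here the stock CentOS Python versus /opt/python2.6). A throwaway WSGI script like the sketch below, pointed to by the same WSGIScriptAlias, shows which interpreter mod_wsgi is actually embedding:

        import sys

        def application(environ, start_response):
            # report the interpreter and installation mod_wsgi is using
            output = "version: %s\nprefix: %s\n" % (sys.version, sys.prefix)
            start_response('200 OK', [('Content-Type', 'text/plain'),
                                      ('Content-Length', str(len(output)))])
            return [output]

    If the reported version or prefix is not the /opt/python2.6 installation, rebuilding mod_wsgi against that Python is the usual next step.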

    Read the article

  • Java: Clearing up the confusion on what causes a connection reset

    - by Zombies
    There seems to be some confusion as well as contradicting statements on various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see here that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means, besides "underlying TCP/IP error." What really causes this? I doubt it has anything to do with the connection being closed (since closing a connection isn't an exception-worthy flag, and reading from a closed connection is, but that isn't an "underlying TCP/IP error"). My hypothesis is this: Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP). And that a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.

    Read the article

  • SQL UNION ALL with a INNER JOIN

    - by kOhm
    I'm looking for the best way to display all rows from two tables while joining first by one field (dwg) and then, where applicable, by a second join on part. Table1 data consists of schematics (dwg) along with a list of parts required to build the item depicted in the drawing. Table2 consists of data about the actual parts ordered to build the schematic. Some parts in table2 are a combination of parts in table1 (ex: foo and bar in table1 were ordered as foobar in table2). I can display all rows in both tables with UNION ALL, but this doesn't join on both the dwg and part fields. I looked at FULL OUTER JOIN also, but I haven't figured out how to join first by dwg, then by part. Here is an example of the data.

    table1                      table2
    dwg    part     qty         order  dwg    part      qty
    -----  -------  -----       -----  -----  --------  -----
    123    foo      1           ord1   123    foobar    1
    123    bar      1           ord1   123    bracket   2
    123    widget   2           ord2   123    screw     4
    123    bracket  4           ord2   123    nut       4
    456    foo      1           ord2   123    widget    2
                                ord2   123    bracket   2
                                ord3   456    foo       1

    Desired output: The goal is to create a view that provides visibility to all parts in table1 and the associated orders in table2 (including those parts that appear in one table but not the other), so that I can see all the drawing parts in table1 and the associated records in table2, along with records in table2 where the part wasn't in table1.

    part_request_order_report
    dwg    part     qty         order  part      qty
    -----  -------  -----       -----  --------  -----
    123    foo      1
    123    bar      1
    123    widget   2           ord2   widget    2
    123    bracket  4           ord1   bracket   2
    123    bracket  4           ord2   bracket   2
    123                         ord1   foobar    1
    123                         ord1   screw     4
    123                         ord1   nut       4
    456    foo      1           ord3   foo       1

    Is this possible? Or am I better off iterating through the data to build the report table? Thanks in advance.

    Read the article

  • Explaining the forecasts from an ARIMA model

    - by Samik R.
    I am trying to explain to myself the forecasting result from applying an ARIMA model to a time-series dataset. The data is from the M1-Competition, the series is MNB65. For quick reference, I have a google doc spreadsheet with the data. I am trying to fit the data to an ARIMA(1,0,0) model and get the forecasts. I am using R. Here are some output snippets: > arima(x, order = c(1,0,0)) Series: x ARIMA(1,0,0) with non-zero mean Call: arima(x = x, order = c(1, 0, 0)) Coefficients: ar1 intercept 0.9421 12260.298 s.e. 0.0474 202.717 > predict(arima(x, order = c(1,0,0)), n.ahead=12) $pred Time Series: Start = 53 End = 64 Frequency = 1 [1] 11757.39 11786.50 11813.92 11839.75 11864.09 11887.02 11908.62 11928.97 11948.15 11966.21 11983.23 11999.27 I have a few questions: (1) How do I explain that although the dataset shows a clear downward trend, the forecast from this model trends upward. This also happens for ARIMA(2,0,0), which is the best ARIMA fit for the data using auto.arima (forecast package) and for an ARIMA(1,0,1) model. (2) The intercept value for the ARIMA(1,0,0) model is 12260.298. Shouldn't the intercept satisfy the equation: C = mean * (1 - sum(AR coeffs)), in which case, the value should be 715.52. I must be missing something basic here. (3) This is clearly a series with non-stationary mean. Why is an AR(2) model still selected as the best model by auto.arima? Could there be an intuitive explanation? Thanks.

    Read the article

  • DB design for master file in enterprise software

    - by Thang Nguyen
    Dear all. I want to write an enterprise application and now I'm in the DB design phase. The software will have some master data such as Suppliers, Customers, Inventories, Bankers... I am considering 2 options: (1) Put each of these in a separate table. The advantage: the table will have all necessary information for that kind of master file (Customer: name, address,.../Inventory: Type, Manufacturer, Condition...). Disadvantage: Not flexible. When I want to have a new type of master data, such as Insurer, I have to design another table. (2) Put all in one table, and have this table hold a foreign key to another table which has the type of each kind of master data (table 1: id, data_type, code, name, address....; table 2: data_type, data_type_name). Advantage: flexible - if I want more master data such as Insurer, I just put in table 2: code: 002, name: Insurer, and then put the details of each insurer into table 1. Disadvantage: table 1 must have sufficient fields to store all kinds of information, including customer name, address, account, inventory's manufacturer, inventory's quality... So which method do you usually use (or which do you think works better)? Thank you very much

    Read the article

  • Should I Solve this with Multithreading in Ruby?

    - by viatropos
    I have a strange case; here's the sequence of actions: (1) User edits a document and hits save. (2) Application sends GET request to service. (3) Service sends POST request back to application in the middle of responding to the GET request. (4) Application, in the same state as when it made the GET request, responds to the POST request (sends document data) to service. (5) Service sends data back to Application (responding to original GET request). (6) Application handles the rest... The use case is this: I was thinking how can I make Yahoo Pipes POST data? Specifically, I want it to be able to update Google Docs when a user makes a change locally (on a custom editor). So user edits doc, makes GET request to Yahoo Pipes, Pipes makes a POST request back to App to get the document (Pipes can only make this type of POST request), App sends doc, Pipes formats data according to the Google API, Pipes responds to GET request with Google API formatted XML, App makes the post request. Theoretically, how would I accomplish this? It seems that I need to create a separate ruby Process for the GET request, and when Pipes sends the POST request, I find that process and send its output, then I'm stuck. This would cut out the need for a database for this particular case (I could save the stuff temporarily in a database, but that doesn't seem right). Any ideas? This would make it so I don't have to format things to the Google API in ruby, I could leave that to Pipes.

    Read the article

  • sorting a timer in matlab

    - by AP
    OK, it seems like a simple problem, but I am having trouble with it. I have a timer for each data set which resets improperly, and as a result my timing gets mixed up. Any ideas how to correct it, without losing any data? For example, the timer column ideally should read like the first column below, but mine reads like the second:

    ideal  mine
    -----  -----
    1      3
    2      4
    3      5
    4      6
    5      1
    6      2

    How do I change column 2, or make a new column which reads like column 1, without changing the order of the rows which have data? This is just an example, as my files are 86000 rows long. Also, I have missing timers which I do not want to miss; this implies no data for that period of time. EDIT: I do not want to change the other columns. Column 1 is the GPS counter and so it does not sync with the computer timer due to some other issues. I just want to change that one column so that it goes from high to low without affecting the other rows. Also it needs to take care of missing points (if I did not care about missing points, a simple n=1:max would work).

    Read the article
