Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): "Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway." I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I had never heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing left you with a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early in the life of .NET.


  • What should be taught in a "Fundamentals of programming" course at university?

    - by Dervin Thunk
    I have started a new question (see here) because I think the topic is of importance in a more general form. The question is now: if you were a professor in a Computer Science department at some university, what would make it into your course? This is a programming course: second term, first year, computer science/computer engineering. Remember you have a limited amount of time, and students are at different levels of competence; some may become scientists, but some will also go on to be programmers in companies of various kinds. You have to cater to all. Bonus: what language? (Although see this question for my current thoughts about this...) Maybe you want to attach a course outline from some university? See here for an even more general question about this. Answer: I can't really summarize this post... I guess it was too subjective. However, it looks like we have to cover the history of computing to a certain extent, computer architecture (memory, registers, and so on), C, and finally some basic algorithms and data structures in a problem-solving fashion. That will be the bare bones of the course. Thanks all. I will accept the most voted-up answer to close the thread, as it should be done.


  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures I need to store in Java. The naive don't-worry-about-memory approach says do this: public class Record { final private int field1; final private int field2; final private long field3; /* constructor & accessors here */ } List<Record> records = new ArrayList<Record>(); If I end up using a large number ( 106 ) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs: public class OptimizedRecordStore { final private int[] field1; final private int[] field2; final private long[] field3; Record getRecord(int i) { return new Record(field1[i],field2[i],field3[i]); } /* constructor and other accessors & methods */ } edit: assume the # of records is something that is changed infrequently or never I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. obviously if I add/change the # of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one, or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.


  • SqlCeResultSet Problem

    - by Vlad
    Hello, I have a SmartDevice project (.NET CF 2.0) configured to be tested on the USA Windows Mobile 5.0 Pocket PC R2 emulator. My project uses SqlCe 3.0. Understanding that a SmartDevice project has to be more careful with the device's memory, I am using SqlCeResultSets. The result sets are strongly typed, autogenerated by Visual Studio 2008 using the custom tool MSResultSetGenerator. The problem I am facing is that the result set does not recognize any column names; the autogenerated code for the fields does not work. In the client code I am using:

        InfoResultSet rs = new InfoResultSet();
        rs.Open();
        rs.ReadFirst();
        string myFormattedDate = rs.MyDateColumn.ToString("dd/MM/yyyy");

    When execution on the emulator reaches rs.MyDateColumn, the application throws a System.IndexOutOfRangeException. Investigating the stack trace:

        at System.Data.SqlServerCe.FieldNameLookup.GetOrdinal()
        at System.Data.SqlServerCe.SqlCeDataReader.GetOrdinal()

    I've tested the GetOrdinal method (in my autogenerated class that inherits SqlCeResultSet):

        this.GetOrdinal("MyDateColumn");     // throws an exception
        this.GetName(1);                     // returns "MyDateColumn"
        this.GetOrdinal(this.GetName(1));    // throws an exception :)

    [edit added] The table exists and it is filled with data. Using typed DataSets works like a charm. Regenerating the SqlCeResultSet does not solve the issue; the problem remains. The problem, basically, is that I am not able to access a column by its name. The data can be accessed through this.GetDateTime(1), using the column ordinal, but the application fails executing this.GetOrdinal("MyDateColumn"). Also, I have updated Visual Studio 2008 to Service Pack 1. Additionally, I am developing the project on a virtual machine with Windows XP SP2, but in my opinion whether the machine is virtual or not should have no effect on development. Am I doing something wrong, or am I missing something? Thank you.


  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
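    One possible shape for the decoupling described above (a minimal sketch, assuming JavaMail or similar sits behind the transport interface; ThrottledMailer, MailTransport, and MimeMessageTask are invented names for illustration):

        import java.util.concurrent.*;

        // Hypothetical decoupled mailer: business code enqueues, a single worker
        // drains the queue at a fixed rate, so throttling and transport details
        // are isolated from the business objects.
        public class ThrottledMailer {
            private final BlockingQueue<MimeMessageTask> queue = new LinkedBlockingQueue<MimeMessageTask>();
            private final ScheduledExecutorService worker = Executors.newSingleThreadScheduledExecutor();

            public ThrottledMailer(final MailTransport transport, long millisBetweenSends) {
                // One send per tick gives a crude rate limit; bounce (NDR) handling
                // and persistence across restarts would hook in around transport.send().
                worker.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        MimeMessageTask task = queue.poll();
                        if (task != null) transport.send(task);
                    }
                }, 0, millisBetweenSends, TimeUnit.MILLISECONDS);
            }

            public void enqueue(MimeMessageTask task) { queue.offer(task); }

            // Stand-ins for the real JavaMail-backed pieces.
            public interface MailTransport { void send(MimeMessageTask task); }
            public static class MimeMessageTask { /* recipient, rendered body, retry count... */ }
        }

    Keeping the queue behind its own class makes it straightforward to swap the in-memory queue for a persistent one later, which is what surviving application restarts would require.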


  • Silverlight performance with many loaded controls

    - by gius
    I have a SL application with many DataGrids (from the Silverlight Toolkit), each on its own view. If several DataGrids are opened, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be in other places in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow the showing of other DataGrids. Note that the DataGrids are NOT disposed and then recreated; they remain in memory, and only their parent control is hidden and made visible again. I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this Silverlight native library responsible for drawing. The problem is not in the DataGrid control - I tried to replace it with XCeed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more opened controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip UPDATE: Using the VS11 profiler on the sample suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why is it slower as more controls are loaded elsewhere? Is there a way to improve the performance?


  • error: invalid type argument of '->' (have 'struct node')

    - by Roshan S.A
    Why can't I access the pointer "Cells" like an array? I have allocated the appropriate memory, so why won't it act like an array here? It works like an array for a pointer to a basic data type.

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #define MAX 10

        struct node {
            int e;
            struct node *next;
        };
        typedef struct node *List;
        typedef struct node *Position;

        struct Hashtable {
            int Tablesize;
            List Cells;
        };
        typedef struct Hashtable *HashT;

        HashT Initialize(int SIZE, HashT H)
        {
            int i;
            H = (HashT)malloc(sizeof(struct Hashtable));
            if (H != NULL) {
                H->Tablesize = SIZE;
                printf("\n\t%d", H->Tablesize);
                H->Cells = (List)malloc(sizeof(struct node) * H->Tablesize);
                /* should it not act like an array from here on? */
                if (H->Cells != NULL) {
                    for (i = 0; i < H->Tablesize; i++) {
                        /* the following lines are the ones that throw the error */
                        H->Cells[i]->next = NULL;
                        H->Cells[i]->e = i;
                        printf("\n %d", H->Cells[i]->e);
                    }
                }
            }
            else
                printf("\nError! Out of Space");
        }

        int main()
        {
            HashT H;
            H = Initialize(10, H);
            return 0;
        }

    The error I get is as in the title: error: invalid type argument of '->' (have 'struct node').


  • Array: Recursive problem cracked me up

    - by VaioIsBorn
    An array of integers A[i] (i >= 1) is defined in the following way: an element A[k] (k > 1) is the smallest number greater than A[k-1] such that the sum of its digits is equal to the sum of the digits of 4*A[k-1]. You need to write a program that calculates the Nth number in this array, given the first element A[1].

    INPUT: One line of standard input contains two numbers separated by a single space: A[1] (1 <= A[1] <= 100) and N (1 <= N <= 10000).
    OUTPUT: The standard output should contain a single integer, A[N], the Nth number of the defined sequence.

    Input: 7 4
    Output: 79
    Explanation: the elements of the array are 7, 19, 49, 79, ... and the 4th element is the solution.

    I tried solving this by coding a separate function that, for a given number, calculates the sum of its digits and finds the smallest number greater than A[k-1] as the problem says, but with no success: the first test failed because of a memory limit, the second because of a time limit, and now I have no idea how to solve this. One friend suggested recursion, but I don't know how to set that up. Anyone who can help in any way, please write; also, please suggest some ideas about using recursion/DP for solving this problem. Thanks. (A reference implementation of the brute force follows below.)
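    For reference, a direct translation of the statement into Java (a brute-force baseline for checking correctness against the sample, not a fix for the judge's time/memory limits that the asker ran into):

        import java.util.Scanner;

        public class DigitSumSequence {
            static int digitSum(long n) {
                int s = 0;
                while (n > 0) { s += n % 10; n /= 10; }
                return s;
            }

            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                long a = in.nextLong();   // A[1]
                int n = in.nextInt();     // N
                for (int k = 2; k <= n; k++) {
                    int target = digitSum(4 * a);
                    long m = a + 1;
                    // smallest number > A[k-1] whose digit sum matches digitSum(4*A[k-1])
                    while (digitSum(m) != target) m++;
                    a = m;
                }
                System.out.println(a);    // prints 79 for input "7 4"
            }
        }

    The O(1) memory footprint here suggests the asker's memory-limit failure came from storing the whole sequence; only the previous element is actually needed.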


  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem: we are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL, on Linux, on an 8-core machine. Our new hardware is a monster with 32 cores and 32 GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but it's still not production ready. In addition, I have heard a lot of good things about Percona builds. This is what I know so far:

    - MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    - Percona: scales well, good backing company; I don't have much experience with it
    - MariaDB: don't know much about it, besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: don't know much
    - Drizzle: mostly optimized for cloud computing

    I would like to know the general opinion on this problem. Which build/version should I go with? How are you picking your builds/versions? Thanks!


  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I have the following code that takes 14 MB or more of image data encoded as base64 strings and converts the images to JPEG before writing them to a file on the iPhone. It crashes my program, giving the following error:

        Program received signal: "0".
        warning: check_safe_call: could not restore current frame

    I tweaked my program and it can process a few more images before the error appears again. My code is as follows:

        // parameters is an array where the fourth element contains a list of images
        // in base64 encoded strings
        NSMutableArray *imageStrList = (NSMutableArray*) [parameters objectAtIndex:5];
        while (imageStrList.count != 0) {
            NSString *imgString = [imageStrList objectAtIndex:0];
            // Create a file name using my own Utility class
            NSString *fileName = [Utility generateFileNName];
            NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
            UIImage *img = [UIImage imageWithData:restoredImg];
            NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
            [imgJPEG writeToFile:fileName atomically:YES];
            [imageStrList removeObjectAtIndex:0];
        }

    I tried playing around with UIImageJPEGRepresentation and found that the lower the quality value, the more images can be processed, but this should not be the way. I am wondering if there is any way to free up the memory for each image immediately after processing it, so that it can be used by the next one in line.


  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare two sets of musical pieces: a performance, taken in MIDI format with note details extracted and saved in a database table, against sheet music, taken into XML format. When evaluating the performance against the sheet music (note details: pitch, duration, rhythm), note alignment needs to be done to identify missed/extra/incorrect/swapped notes relative to the reference (sheet music) notes. I have around 1800-2500 notes in one piece, approximately (it can be more, and with polyphonic music even higher; right now I'm doing monophonic). So will I have to read all of these into an array? Will that overload memory or overflow the stack? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is this approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? Greatly appreciate any insight or suggestions. Thanks in advance. (A small alignment sketch follows below.)
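    As a starting point, a minimal dynamic-programming alignment in Java (Needleman-Wunsch-style global alignment over pitch sequences; the unit costs are invented for illustration, and real scoring would also weigh duration and rhythm):

        // dp[i][j] is the cheapest way to align the first i reference notes with
        // the first j played notes. O(n*m) time and space, which is manageable
        // for ~2500-note pieces (a 2501 x 2501 int table is about 25 MB).
        public class NoteAligner {
            static int align(int[] reference, int[] played) {
                final int GAP = 1;      // cost of a missed or extra note (assumed)
                final int SUBST = 1;    // cost of an incorrect note (assumed)
                int n = reference.length, m = played.length;
                int[][] dp = new int[n + 1][m + 1];
                for (int i = 0; i <= n; i++) dp[i][0] = i * GAP;
                for (int j = 0; j <= m; j++) dp[0][j] = j * GAP;
                for (int i = 1; i <= n; i++) {
                    for (int j = 1; j <= m; j++) {
                        int match = dp[i - 1][j - 1] + (reference[i - 1] == played[j - 1] ? 0 : SUBST);
                        int miss  = dp[i - 1][j] + GAP;   // reference note not played
                        int extra = dp[i][j - 1] + GAP;   // played note not in reference
                        dp[i][j] = Math.min(match, Math.min(miss, extra));
                    }
                }
                return dp[n][m];  // backtracking through dp recovers the actual alignment
            }

            public static void main(String[] args) {
                int[] ref = {60, 62, 64, 65};   // MIDI pitches, e.g. C D E F
                int[] play = {60, 64, 65};      // the D was missed
                System.out.println(align(ref, play));  // prints 1 (one gap)
            }
        }

    This is exactly approximate string matching (edit distance), so the choice versus KMP/Boyer-Moore is not really open: those find exact occurrences of a pattern, while alignment with gaps needs the DP formulation.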


  • Java: autofiltering list?

    - by Jason S
    I have a series of items arriving which are used in one of my data structures, and I need a way to keep track of those items that are retained.

        interface Item {}
        class Foo implements Item { ... }
        class Baz implements Item { ... }

        class StateManager {
            List<Foo> fooList;
            Map<Integer, Baz> bazMap;
            public List<Item> getItems();
        }

    What I want is that if I do the following:

        for (int i = 0; i < SOME_LARGE_NUMBER; ++i) {
            /* randomly do one of the following:
             * 1) put a new Foo somewhere in the fooList
             * 2) delete one or more members from the fooList
             * 3) put a new Baz somewhere in the bazMap
             * 4) delete one or more members from the bazMap
             */
        }

    then if I make a call to StateManager.getItems(), I want to return a list of those Foo and Baz items which are found in the fooList and the bazMap, in the order they were added. Items that were deleted or displaced from fooList and bazMap should not be in the returned list. How could I implement this? SOME_LARGE_NUMBER is large enough that I don't have the memory available to retain all the Foo and Baz items and then filter them. (A sketch of one approach follows below.)
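    One way to get insertion order plus cheap deletion without keeping a separate copy of every item that ever arrived (a sketch; the method names are invented, and whether it fits depends on how "displaced" items are detected in the real code): maintain a LinkedHashSet of live items alongside the two structures, updated in the same add/remove methods.

        import java.util.*;

        // Sketch: a LinkedHashSet remembers insertion order and gives O(1)
        // add/remove, so the "filtered" list is just whatever is still in the set.
        class StateManagerSketch {
            private final List<Foo> fooList = new ArrayList<Foo>();
            private final Map<Integer, Baz> bazMap = new HashMap<Integer, Baz>();
            private final LinkedHashSet<Item> live = new LinkedHashSet<Item>();

            public void addFoo(Foo f)    { fooList.add(f); live.add(f); }
            public void removeFoo(Foo f) { fooList.remove(f); live.remove(f); }
            public void putBaz(int key, Baz b) {
                Baz displaced = bazMap.put(key, b);  // a replaced mapping counts as a delete
                if (displaced != null) live.remove(displaced);
                live.add(b);
            }
            public void removeBaz(int key) {
                Baz removed = bazMap.remove(key);
                if (removed != null) live.remove(removed);
            }
            public List<Item> getItems() { return new ArrayList<Item>(live); }

            interface Item {}
            static class Foo implements Item {}
            static class Baz implements Item {}
        }

    Memory stays proportional to the items currently retained, since deleted items leave the set as soon as they leave the underlying structure.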


  • Is C++ (one of) the best language to learn at first

    - by AlexV
    C++ has been one of the most used programming languages in the world for 25+ years. My first job as a programmer was in C++, and I coded in C++ every day for nearly 4 years. Now I do mostly PHP, but I will forever cherish this C++ background. C++ has helped me understand many "under the hood" features/behaviors/restrictions of many other (and different) programming languages like PHP and Delphi. I've been a full-time programmer for 6+ years now, and since I have a quite varied programming background, I often get asked by "newbies" where to start to become a "good" programmer. I think C++ is one of the best languages to start with because it gives you real, useful experience that will last and will teach you how things work under the hood. It's not the easiest one for a newbie to learn, but in my opinion it's one that will reward you in the long term. I would like to know your opinion on this matter to add to my arguments when I guide "newbies". After this introduction, here's my question: is C++ (one of) the best language(s) to learn first, in your view? Since it's subjective, I've marked this question as community wiki. EDIT: This question is not about why Java (or C# or any other language) is better than C++ to start with; it's about what makes C++ a good choice or not as one of your first languages. For example, for me, C++ made me understand how memory works. Today, in many languages everything is managed by the garbage collector, and some people don't even know that. I'm glad I know how it works underneath, and I think it can help you write better code.


  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this I plan the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.

    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:

    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was getting saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).

    Now I have 2 concerns:

    a) People may exploit my server bandwidth by uploading large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?


  • Does The Clear Method On A Collection Release The Event Subscriptions?

    - by DaveB
    I have a collection:

        private ObservableCollection<Contact> _contacts;

    In the constructor of my class I create it:

        _contacts = new ObservableCollection<Contact>();

    I have methods to add and remove items from my collection. I want to track changes to the entities in my collection which implement the INotifyPropertyChanged interface, so I subscribe to their PropertyChanged event:

        public void AddContact(Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged += new PropertyChangedEventHandler(Contact_PropertyChanged);
            _contacts.Add(contact);
        }

        public void AddContact(int index, Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged += new PropertyChangedEventHandler(Contact_PropertyChanged);
            _contacts.Insert(index, contact);
        }

    When I remove an entity from the collection, I unsubscribe from the PropertyChanged event. I am told this is to allow the entity to be garbage collected and not create memory issues.

        public void RemoveContact(Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged -= Contact_PropertyChanged;
            _contacts.Remove(contact);
        }

    So, I hope this is all good. Now, I need to clear the collection in one of my methods. My first thought was to call _contacts.Clear(). Then I got to wondering whether this releases those event subscriptions. Would I need to create my own clear method? Something like this:

        public void ClearContacts()
        {
            foreach (Contact contact in _contacts)
            {
                this.RemoveContact(contact);
            }
        }

    I am hoping one of the .NET C# experts here can clear this up for me or tell me what I am doing wrong.


  • If setUpBeforeClass() fails, test failures are hidden in PHPUnit's JUnit XML output

    - by Adam Monsen
    If setUpBeforeClass() throws an exception, no failures or errors are reported in PHPUnit's JUnit XML output. Why?

    Example test class:

        <?php
        class Test extends PHPUnit_Framework_TestCase
        {
            public static function setUpBeforeClass()
            {
                throw new \Exception('masks all failures in xml output');
            }

            public function testFoo()
            {
                $this->fail('failing');
            }
        }

    Command line:

        phpunit --verbose --log-junit out.xml Test.php

    Console output:

        PHPUnit 3.6.10 by Sebastian Bergmann.

        E

        Time: 0 seconds, Memory: 3.25Mb

        There was 1 error:

        1) Test
        Exception: masks all failures in xml output
        /tmp/pu/Test.php:6

        FAILURES!
        Tests: 0, Assertions: 0, Errors: 1.

    JUnit XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <testsuites>
          <testsuite name="Test" file="/tmp/phpunit-broken/Test.php"/>
        </testsuites>

    More info:

        $ php --version
        PHP 5.3.10-1ubuntu3.1 with Suhosin-Patch (cli) (built: May 4 2012 02:21:57)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
            with Xdebug v2.1.0, Copyright (c) 2002-2010, by Derick Rethans


  • Better use a tuple or numpy array for storing coordinates

    - by Ivan
    Hi, I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to mind:

    1) I'm defining a class that will contain the coordinates (x, y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?

    2) In some cases, these coordinates will be used to build a complex number, evaluated with a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and the real part will have to be used in the end, maybe it is better to use complex numbers directly to store (x, y)? How bad is the overhead of the conversion from complex to real in Python? The C++ code does a lot of these conversions, and this is a big slowdown in that code.

    3) Also, some coordinate transformations will have to be performed; for these, the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on complex variables?

    Thank you


  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis running on Tomcat. I'm wondering how to cache results... I've made the following config in my sqlmap file:

        <sqlMap namespace="Account">
            <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false">
                <flushInterval hours="24"/>
                <flushOnExecute statement="Account.addAccount"/>
                <flushOnExecute statement="Account.deleteAccount"/>
                <property name="reference-type" value="STRONG" />
            </cacheModel>
            <typeAlias alias="Account" type="domain.Account" />
            <select id="getAccounts" resultClass="Account" cacheModel="accountCache">
                fix all; select id, name, pin from accounts;
            </select>
            <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache">
                fix all; select id, name, pin from accounts where id=#id#;
            </select>
            <insert id="addAccount" parameterClass="Account">
                fix all; insert into accounts (id, name, pin) values (#id#, #name#, #pin#);
            </insert>
            <delete id="deleteAccount" parameterClass="Long">
                fix all; delete from accounts where id = #id#;
            </delete>
        </sqlMap>

    Then I did some tests... I have a Hessian client application. I'm calling getAccounts several times, and after each call there is a query to the DBMS. How can I make my service query the DBMS only the first time getAccounts is called (after a server restart) and use the cache for the following calls?


  • What does it mean for an OS to "execute within user processes"? Do any modern OSes use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."


  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName = "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join then manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )


  • How to find Tomcat's PID and kill it in python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that: Calls shutdown.sh Runs ps -aef | grep tomcat to find any process with Tomcat referenced If applicable, kills the process with kill -9 <PID> Here's what I've got so far (as a prototype - I'm brand new to Python BTW): #!/usr/bin/python # Imports import sys import subprocess # Load from imported module. if __init__ == "__main__": main() # Main entry point. def main(): # Shutdown Tomcat shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh" subprocess.call([shutdownCmd], shell=true) # Check for PID grepCmd = "ps -aef | grep tomcat" grepResults = subprocess.call([grepCmd], shell=true) if(grepResult.length > 1): # Get PID and kill it. pid = ??? killPidCmd = "kill -9 $pid" subprocess.call([killPidCmd], shell=true) # Exit. sys.exit() I'm struggling with the middle part - with obtaining the grep results, checking to see if their size is greater than 1 (since grep always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the killPidCmd. Thanks in advance!


  • function returns after an XMLHttpRequest

    - by ashays
    Alright, I know questions like this have probably been asked dozens of times, but I can't seem to find a working solution for my project. Recently, while using jQuery for a lot of AJAX calls, I've found myself in a form of callback hell. Whether or not jQuery is too powerful for this project is beyond the scope of this question. So basically, here's some code that shows what's going on:

        function check_form(table) {
            var file = "/models/" + table + ".json";
            var errs = {};
            var xhr = $.getJSON(file, function(json) {
                for (key in json) {
                    var k = key;
                    var r = json[k];
                    $.extend(errs, check_item("#" + k, r));
                }
            });
            return errs;
        }

    And... as you can probably guess, I get an empty object returned. My original idea was to use some sort of onReadyStateChange handler that would return whenever the readyState had finally hit 4, but this causes my application to hang indefinitely. I need these errors to decide whether or not the form is allowed to submit, as well as to tell the user where the errors are in the application. Any ideas?

    Edit: It's not the prettiest solution, but I've managed to get it to work. Basically, check_form has the JSON passed to it from another function, instead of loading it. I was already loading it there too, so it's probably best that I don't continue to load the same file over and over again anyway. I was just worried about overloading memory. These files aren't absolutely huge, though, so I guess it's probably okay.


  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start();
        // Sets a 'user' cookie with the session id.
        // This session stores the user's login data. It's an ordinary session.
        $USERSESSION = $_SESSION;   // saving this session
        $userSessId = session_id();
        session_write_close();      // closing this session

        session_name('chatroom');
        session_start();
        // Sets a 'chatroom' cookie, but it contains the same session id and
        // points to the same data :( .
        // This session would be used to store the chat text. This session would be
        // shared between all users joined in this chat room by setting the same
        // session id for all members.
        // Calling session_regenerate_id would make a different session id for
        // every user, and it wouldn't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with AJAX that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory, so they can be accessed fast. I could store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.


  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitation is if I upload a list of files in one client form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions:

    1. Should there be a separate per-file size limitation and a total request size limitation, or is the file size the only limitation, with the request capable of uploading hundreds of files as long as they are not too large?
    2. Does the Spring request wrapper read the complete request and store it in the Java heap memory, or does it store temporary files of it to be able to handle a big quota?
    3. Would reading the HttpServletRequest as a stream change the size limitation, compared with the complete HTTP request being read at once by the application server?

    What is the bottleneck of this process: the Java heap size, the quota of the filesystem on which my web server runs, the maximum allowed BLOB size of the database in which I am going to save the file, or Spring's internal limitations? (A configuration sketch follows below.)

    Related threads that still don't have an exact answer to this:

    - does-spring-framework-support-streaming-mode-in-mutlipart-requests
    - is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler
    - how-to-drop-body-of-a-request-after-checking-headers-in-servlet
    - apache-commons-fileupload-throws-malformedstreamexception
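    For the size-limit part, Spring's Commons FileUpload-backed resolver exposes the relevant knobs directly (a sketch; the 10 MB and 4 KB figures are illustrative, not recommendations):

        import org.springframework.web.multipart.commons.CommonsMultipartResolver;

        // Sketch: cap the whole multipart request and control when Spring spills
        // uploads to temp files instead of holding them on the heap.
        public class MultipartConfigSketch {
            public static CommonsMultipartResolver multipartResolver() {
                CommonsMultipartResolver resolver = new CommonsMultipartResolver();
                resolver.setMaxUploadSize(10 * 1024 * 1024);  // reject requests over ~10 MB total
                resolver.setMaxInMemorySize(4 * 1024);        // larger parts go to temp files, not the heap
                return resolver;
            }
        }

    Declared as a bean named "multipartResolver", the DispatcherServlet picks it up automatically, and oversized requests are rejected with a MaxUploadSizeExceededException; the maxInMemorySize threshold is what keeps large uploads off the heap.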


  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help:

    - Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value-pair style lookup.
    - When we query for a file, we don't want to have to load all of it into memory. They're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    - We want to easily add, remove, and export files from storage.
    - We'd like to set up automatic file replication between two servers (we can write a script for this). That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    - We also have other smaller files that have a tree-type relationship with the big files. One file's content will point to the next, and so on. It's not a "spoked wheel," it's a full-blown tree.
    - We'd prefer a Python, C, or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this? (A tiny sketch of the lookup idea follows below.)
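    At its simplest, the key-value lookup piece is just an index from the composite key to a file reference that is never eagerly loaded (a sketch; the key format, class, and method names are invented for illustration):

        import java.io.File;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch: map name/date/source/time/artifact to a File handle. Opening and
        // partial ingestion stay with the proprietary API; nothing is read here.
        public class FileIndex {
            private final Map<String, File> index = new HashMap<String, File>();

            private static String key(String name, String date, String source,
                                      String time, String artifact) {
                return name + "/" + date + "/" + source + "/" + time + "/" + artifact;
            }

            public void register(String name, String date, String source,
                                 String time, String artifact, File location) {
                index.put(key(name, date, source, time, artifact), location);
            }

            // Returns a reference only; the 50 GB payload is never pulled into memory.
            public File lookup(String name, String date, String source,
                               String time, String artifact) {
                return index.get(key(name, date, source, time, artifact));
            }
        }

    Anything beyond this (replication, the tree of smaller files, crash safety) is where an existing store earns its keep over a hand-rolled index.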

