Search Results

Search found 3996 results on 160 pages for 'operations'.


  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving and uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.

    Support for signed access keys: Access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.

    Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in a few lines of Perl or similar as a CGI script.

    Perlbal (in webserver mode) looks nice, but its single-threaded model does not fit my database lookup, and it also does not support query strings. Lighttpd/Nginx/... have some modules for these tasks, but it is not feasible to put everything together without ending up writing my own extensions/modules. So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside a webserver (i.e. CGI)? How can I avoid or speed up piping content between the webserver and my application? Thanks in advance!

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. So each set may be stored as a bit field (e.g. 0000100111...) of a relatively short length (e.g. 1024). That is, bit X in the bitfield indicates whether item X (of 1024 possible items) is included in the given set or not.

    Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time.

    The simplest way to solve this would be to AND the bitfield for set Y with the bitfield of every set in the data store, one by one, picking the ones whose AND result matches Y's bitfield. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bitfield? Are there databases that already support such operations on large collections of sets?
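
    A minimal C# sketch of the brute-force check described above, assuming the 1024-bit domain is packed into 64-bit words (all names are illustrative):

        using System.Collections.Generic;

        static class SubsetScan
        {
            const int Words = 1024 / 64;   // 1024-bit domain packed into 16 ulongs

            // Y is a subset of S exactly when every bit set in Y is also set in S,
            // i.e. (S AND Y) == Y for each 64-bit word.
            static bool IsSubset(ulong[] y, ulong[] s)
            {
                for (int w = 0; w < Words; w++)
                    if ((s[w] & y[w]) != y[w])
                        return false;
                return true;
            }

            // Linear scan over the store. An inverted index (bit position -> ids of
            // stored sets containing that bit) would let you intersect only the lists
            // for the bits set in Y instead of touching every stored set.
            public static IEnumerable<int> FindSupersets(ulong[] y, IList<ulong[]> store)
            {
                for (int i = 0; i < store.Count; i++)
                    if (IsSubset(y, store[i]))
                        yield return i;
            }
        }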

  • recvfrom returns invalid argument when *from* is passed

    - by Aditya Sehgal
    I am currently writing a small UDP server program in Linux. The UDP server will receive packets from two different peers and will perform different operations based on which peer the packet came from. I am trying to determine the source from which I receive the packet. However, when select returns and recvfrom is called, it returns with an error of Invalid Argument. If I pass NULL as the second-to-last argument, recvfrom succeeds. I have tried declaring fromAddr as struct sockaddr_storage, struct sockaddr_in, and struct sockaddr, without any success. Is there something wrong with this code? Is this the correct way to determine the source of the packet? The code snippet follows.

        /* TODO: update for TCP; use recv */
        if ((pkInfo->rcvLen = recvfrom(psInfo->sockFd,
                                       pkInfo->buffer,
                                       MAX_PKTSZ,
                                       0,
                                       /* (struct sockaddr *)&fromAddr, */
                                       NULL,
                                       &(addrLen))) < 0)
        {
            perror("RecvFrom failed\n");
        }
        else
        {
            /* Apply filter */
        #if 0
            struct sockaddr_in *tmpAddr;
            tmpAddr = (struct sockaddr_in *)&fromAddr;
            printf("Received Msg From %s\n", inet_ntoa(tmpAddr->sin_addr));
        #endif
            printf("Packet Received of len = %d\n", pkInfo->rcvLen);
        }

  • How can I find out if the MainActivity is being paused from my Java class?

    - by quinestor
    I am using motion sensor detection in my application. My design is this: a class gets the sensor service references from the main activity and then implements SensorEventListener. That is, the MainActivity does not listen for sensor event changes:

        public void onCreate(Bundle savedInstanceState) {
            // ... code
            mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
            mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            // The following is my Java class; it does not extend any Android fragment/activity
            mShakeUtil = new ShakeUtil(mSensorManager, mAccelerometer, this);
            // ... more code ...
        }

    I can't redesign ShakeUtil to be a fragment or an activity, unfortunately. Now, to illustrate the problem, consider: MainActivity is on its way to being destroyed/paused (e.g. a screen rotation), and ShakeUtil's onSensorChanged(SensorEvent event) gets called in the process. One of the things that happens inside onSensorChanged is a dialog interaction, which gives the error:

        java.lang.IllegalStateException: Can not perform this action after onSaveInstanceState

    when this happens between MainActivity's onSaveInstanceState and onPause. I know this can be prevented if I successfully detect in ShakeUtil that MainActivity is being paused. How can I detect from ShakeUtil that MainActivity is being paused or that onSaveInstanceState was called? Alternatively, how can I avoid this issue without making ShakeUtil extend Activity? So far I have tried flag variables, but that isn't good enough; I guess these are not atomic operations. I tried using Activity's isChangingConfigurations(), but I get an undocumented "NoSuchMethodFound" error. I am unregistering the sensors by calling ShakeUtil from onPause in MainActivity.

  • Can I change the identity of the logged-in user in ASP.net?

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. Because the system I'm using is cookie based, the user isn't logged in to Active Directory, so I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this.

    It seems that the only way a user can be logged in to an ASP.net application is either by connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account, or by typing an Active Directory username and password. Neither of these is workable in my application. I think it would be nice if, when a user logs in and receives the cookie (which actually comes from a separate log on application, by the way), some code could run that tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work?
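
    For reference, the kind of explicit impersonation described above usually boils down to something like the following sketch (classic .NET Framework, P/Invoke of LogonUser; the helper name and account handling are illustrative, not part of the question):

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class AdImpersonation
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            const int LOGON32_LOGON_INTERACTIVE = 2;
            const int LOGON32_PROVIDER_DEFAULT = 0;

            // Runs the given work under the supplied AD account, then reverts.
            public static void RunAs(string domain, string user, string password, Action work)
            {
                IntPtr token = IntPtr.Zero;
                try
                {
                    if (!LogonUser(user, domain, password,
                                   LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                        throw new InvalidOperationException(
                            "LogonUser failed: " + Marshal.GetLastWin32Error());

                    var identity = new WindowsIdentity(token);
                    using (identity.Impersonate())   // WindowsImpersonationContext
                    {
                        work();   // e.g. open the SqlConnection with Integrated Security here
                    }             // Dispose() reverts to the original identity
                }
                finally
                {
                    if (token != IntPtr.Zero) CloseHandle(token);
                }
            }
        }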

  • Is there any function in Matlab for changing the form of matrix?

    - by niko
    Hi, I have to compute an unknown matrix by changing the form of a known matrix according to the following rules:

        H = [-P' | I]
        G = [I | P]

    where H is the known matrix, G is the unknown matrix which has to be calculated, and I is the identity matrix. So, for example, if we had the matrix

        H = [1 1 1 1 0 0;
             0 0 1 1 0 1;
             1 0 0 1 1 0]

    its form has to be changed to

        H = [1 1 1 1 0 0;
             0 1 1 0 1 0;
             1 1 0 0 0 1]

    so

        -P' = [1 1 1;
               0 1 0;
               1 1 0]

    and, since for binary matrices -P = P, therefore

        G = [1 0 0 1 1 1;
             0 1 0 0 1 0;
             0 0 1 1 1 0]

    I know how to solve it on paper by performing basic row operations, but I don't know whether there is any function already written in Matlab to calculate G from H, or H from G, according to the above rules. I would be very thankful if any of you could suggest a method for solving the given problem. Thank you.

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. So I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive and then purging those rows from the orders table. If we need to research something or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through older data all the time. These move-and-purge operations can then be cron'd to run on a weekly basis. I have already done some tests, and I know that I will cut my search times by about a factor of four. So far so good, right?

    However, I was thinking about how to implement lookups into the archive, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and god knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-style solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

  • Asp.net mvc inheritance controllers

    - by Ris90
    Hi, I'm studying ASP.NET MVC and in my test project I have some problems with inheritance. In my model I use inheritance in a few entities:

        public class Employee : Entity
        {
            /* a few public properties */
        }

    This is the base class. And the descendants:

        public class RecruitmentOfficeEmployee : Employee
        {
            public virtual RecruitmentOffice AssignedOnRecruitmentOffice { get; set; }
        }

        public class ResearchInstituteEmployee : Employee
        {
            public virtual ResearchInstitute AssignedOnResearchInstitute { get; set; }
        }

    I want to implement simple CRUD operations for every descendant. What is the better way to implement the controllers and views for the descendants:

    - one controller per descendant;
    - controller inheritance;
    - a generic controller;
    - generic methods in one controller.

    Or maybe there is another way? My ORM is NHibernate; I have a generic base repository and every repository is its descendant. Using a generic controller is, I think, the best way, but with it I will use only the generic base repository, and the extensibility of the system will not be very good. Please, help the newbie)
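
    A minimal sketch of the generic-controller option, assuming a generic repository abstraction like the one mentioned (all interface, class and action names here are illustrative, not the poster's code):

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Assumed repository abstraction (illustrative).
        public interface IRepository<T> where T : Employee
        {
            IEnumerable<T> GetAll();
            T Get(int id);
            void Save(T entity);
        }

        // One generic controller gives every Employee descendant the same CRUD actions;
        // concrete controllers only pin down the type parameter.
        public abstract class EmployeeControllerBase<T> : Controller where T : Employee
        {
            private readonly IRepository<T> _repository;

            protected EmployeeControllerBase(IRepository<T> repository)
            {
                _repository = repository;
            }

            public ActionResult Index()         { return View(_repository.GetAll()); }
            public ActionResult Details(int id) { return View(_repository.Get(id)); }

            [HttpPost]
            public ActionResult Create(T employee)
            {
                _repository.Save(employee);
                return RedirectToAction("Index");
            }
        }

        public class RecruitmentOfficeEmployeeController
            : EmployeeControllerBase<RecruitmentOfficeEmployee>
        {
            public RecruitmentOfficeEmployeeController(IRepository<RecruitmentOfficeEmployee> repo)
                : base(repo) { }
        }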

  • Is locking on the requested object a bad idea?

    - by Quick Joe Smith
    Most advice on thread safety involves some variation of the following pattern:

        public class Thing
        {
            private static readonly object padlock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                    }
                    return this.stuff;
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                    }
                    return this.andNonsense;
                }
            }

            // Rest of class...
        }

    In cases where the get operations are expensive and unrelated, a single locking object is unsuitable because a call to Stuff would block all calls to AndNonsense, degrading performance. And rather than create a lock object for each call, wouldn't it be better to acquire the lock on the member itself (assuming it is not something that implements SyncRoot or some such for that purpose)? For example:

        public string Stuff
        {
            get
            {
                lock (this.stuff)
                {
                    // Pretend that this is a very expensive operation.
                    if (this.stuff == null)
                        this.stuff = "Still threadsafe and good?";
                }
                return this.stuff;
            }
        }

    Strangely, I have never seen this approach recommended or warned against. Am I missing something obvious?
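
    Not from the question, but the usual recommendation for this situation is a dedicated private lock object per lazily-initialized field, which keeps the getters independent while avoiding locking on a reference that starts out null and gets reassigned inside the lock; a minimal sketch:

        public class Thing
        {
            // One private lock object per independent lazy field: Stuff and AndNonsense
            // no longer block each other, and the lock target is never null or reassigned.
            private readonly object stuffLock = new object();
            private readonly object andNonsenseLock = new object();

            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (stuffLock)
                    {
                        if (stuff == null)
                            stuff = ComputeStuff();   // expensive work stays inside its own lock
                        return stuff;
                    }
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (andNonsenseLock)
                    {
                        if (andNonsense == null)
                            andNonsense = ComputeAndNonsense();
                        return andNonsense;
                    }
                }
            }

            private static string ComputeStuff()       { return "Threadsafe!"; }
            private static string ComputeAndNonsense() { return "Also threadsafe!"; }
        }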

  • Any Alternate way for writing to a file other than ofstream

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I am writing to the output file like this:

        ofstream hdrOutputFile;
        /* a few other statements */
        hdrOutputFile.open(fileName, std::ios::out);

        hdrOutputFile << "#include \"commondata.h\"" << endl;
        hdrOutputFile << "#include \"Commonconfig.h\"" << endl;
        hdrOutputFile << "#include \"commontable.h\"" << endl << endl;
        hdrOutputFile << "#pragma pack(push,1)" << endl;
        hdrOutputFile << "typedef struct \n {" << endl;
        /* similar hdrOutputFile statements... */

    I have around 250 lines to write. Is there any better way to perform this task? I want to reduce the number of hdrOutputFile statements and use a buffer to do this. Please guide me on how to do that. I mean something like:

        buff = "#include \"commontable.h\"" + "typedef struct \n {" + .......
        hdrOutputFile << buff;

    Is this way possible? Thanks Ramm

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people deal with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together.

    Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. And what about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo,
                       IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to the database efficient and avoid a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
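
    One common answer (not taken from the question) is a unit-of-work object that owns a single connection and transaction and is handed to all the repositories involved in a batch; a minimal hand-rolled ADO.NET sketch, with all names illustrative:

        using System;
        using System.Data.Common;

        // The unit of work owns one connection and one transaction; repositories
        // created with it enlist in that transaction instead of opening their own.
        public class UnitOfWork : IDisposable
        {
            private readonly DbConnection _connection;
            private readonly DbTransaction _transaction;

            public UnitOfWork(DbProviderFactory factory, string connectionString)
            {
                _connection = factory.CreateConnection();
                _connection.ConnectionString = connectionString;
                _connection.Open();
                _transaction = _connection.BeginTransaction();
            }

            public DbCommand CreateCommand()
            {
                DbCommand cmd = _connection.CreateCommand();
                cmd.Transaction = _transaction;
                return cmd;
            }

            public void Commit()  { _transaction.Commit(); }
            public void Dispose() { _transaction.Dispose(); _connection.Dispose(); }
        }

        public class WidgetRepository /* : IWidgetRepository */
        {
            private readonly UnitOfWork _uow;
            public WidgetRepository(UnitOfWork uow) { _uow = uow; }

            public void StoreSomeStuff()
            {
                using (DbCommand cmd = _uow.CreateCommand())
                {
                    cmd.CommandText = "INSERT INTO Widgets (Name) VALUES ('example')";
                    cmd.ExecuteNonQuery();
                }
            }
        }

        // Usage: one connection and one transaction for the whole batch.
        // using (var uow = new UnitOfWork(SqlClientFactory.Instance, connectionString))
        // {
        //     new Bob(new WidgetRepository(uow), /* ... */).DoStuff();
        //     uow.Commit();
        // }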

  • How much is too much memory allocation in NDK?

    - by Maximus
    The NDK download page notes that, "Typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on."

    I came from a C background and was excited to try to use the NDK to handle most of my OpenGL ES functions and any native functions related to physics, animation of vertices, etc. I'm finding that I'm relying quite a bit on native code and wondering if I may be making some mistakes. I've had no trouble with testing at this point, but I'm curious whether I may run into problems in the future. For example, I have a game struct defined (somewhat like what is seen in the San Angeles example). I'm loading vertex information for objects dynamically (just what is needed for an active game area), so there's quite a bit of memory allocation happening for vertices, normals, texture coordinates, indices and texture graphic data, just to name the essentials. I'm quite careful about freeing what is allocated between game areas. Would I be safer setting some caps on array sizes, or should I charge bravely forward as I'm going now?

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale
        Are these hardware accelerated?
        No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU.
        All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho.
        This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, it makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.

  • Java/Hibernate using interfaces over the entities.

    - by Dennetik
    I am using annotated Hibernate, and I'm wondering whether the following is possible. I have to set up a series of interfaces representing the objects that can be persisted, and an interface for the main database class containing several operations for persisting these objects (... an API for the database). Below that, I have to implement these interfaces and persist them with Hibernate. So I'll have, for example:

        public interface Data {
            public String getSomeString();
            public void setSomeString(String someString);
        }

        @Entity
        public class HbnData implements Data, Serializable {

            @Column(name = "some_string")
            private String someString;

            public String getSomeString() {
                return this.someString;
            }

            public void setSomeString(String someString) {
                this.someString = someString;
            }
        }

    Now, this works fine, sort of. The trouble comes when I want nested entities. The interface of what I'd want is easy enough:

        public interface HasData {
            public Data getSomeData();
            public void setSomeData(Data someData);
        }

    But when I implement the class, following the interface as below, I get an error from Hibernate saying it doesn't know the class "Data".

        @Entity
        public class HbnHasData implements HasData, Serializable {

            @OneToOne(cascade = CascadeType.ALL)
            private Data someData;

            public Data getSomeData() {
                return this.someData;
            }

            public void setSomeData(Data someData) {
                this.someData = someData;
            }
        }

    The simple change would be to change the type from "Data" to "HbnData", but that would obviously break the interface implementation, and thus make the abstraction impossible. Can anyone explain to me how to implement this in a way that will work with Hibernate?

  • Can Haskell's monads be thought of as using and returning a hidden state parameter?

    - by AJM
    I don't understand the exact algebra and theory behind Haskell's monads. However, when I think about functional programming in general, I get the impression that state would be modelled by taking an initial state and generating a copy of it to represent the next state. This is like when one list is appended to another; neither list gets modified, but a third list is created and returned.

    Is it therefore valid to think of monadic operations as implicitly taking an initial state object as a parameter and implicitly returning a final state object? These state objects would be hidden so that the programmer doesn't have to worry about them, and to control how they get accessed. So, the programmer would not try to copy the object representing the IO stream as it was ten minutes ago.

    In other words, if we have this code:

        main = do
            putStrLn "Enter your name:"
            name <- getLine
            putStrLn ("Hello " ++ name)

    ...is it OK to think of the IO monad and the "do" syntax as representing this style of code?

        putStrLn :: IOState -> String -> IOState
        getLine  :: IOState -> (IOState, String)
        main     :: IOState -> IOState

        -- main returns an IOState we can call "state3"
        main state0 = putStrLn state2 ("Hello " ++ name)
          where (state2, name) = getLine state1
                state1 = putStrLn state0 "Enter your name:"

  • Java string to double conversion.

    - by wretrOvian
    Hi, I've been reading up on the net about the issues with handling float and double types in Java. Unfortunately, the picture is still not clear, hence I'm asking here directly. :(

    My MySQL table has various DECIMAL(m,d) columns. The m may range from 5 to 30; d stays constant at 2.

    Question 1: What equivalent data type should I be using in Java to work (i.e. store, retrieve, and process) with the size of the values in my table? (I've settled on double, hence this post.)

    Question 2: While trying to parse a double from a string, I'm getting errors. For example:

        Double dpu = new Double(dpuField.getText());

        "1"      -> java.lang.NumberFormatException: empty String
        "10"     -> 1.0
        "101"    -> 10.0
        "101."   -> 101.0
        "101.1"  -> 101.0
        "101.19" -> 101.1

    What am I doing wrong? What is the correct way to convert a string to a double value? And what measures should I take to perform operations on such values?

  • Analyzing an IronPython Scope

    - by Vercinegetorix
    I'm trying to write C# code with an embedded IronPython script, and then analyze the contents of the script, i.e. list all variables, functions, classes and their members/methods. There's an easy way to start, assuming I've got a scope defined and code executed in it already:

        dynamic variables = pyScope.GetVariables();
        foreach (string v in variables)
        {
            dynamic dynamicV = pyScope.GetVariable(v);
            // seems to return everything: variables, functions, classes, instances of classes
        }

    But how do I figure out what the type of a variable is? For the following Python 'objects', dynamicV.GetType() returns different values:

        x = 5            -- System.Int32
        y = "asdf"       -- System.String
        def func(): ...  -- IronPython.Runtime.PythonFunction
        z = class()      -- IronPython.Runtime.Types.OldInstance; how can I identify what the actual Python class is?
        class NewClass   -- throws an error, GetType() is unavailable

    This is almost what I'm looking for. I could catch the exception thrown when GetType() is unavailable and assume it's a class declaration, but that seems unclean. Is there a better approach? To discover the members/methods of a class, it looks like I can use:

        ObjectOperations op = pyEngine.Operations;
        object instance = op.Call("className");
        foreach (string j in op.GetMemberNames("className"))
        {
            object member = op.GetMember(instance, j);
            Console.WriteLine(member.GetType());
            // once again, GetType() provides some info about the type of the member,
            // but returns null sometimes
        }

    Also, how do I get the parameters of a method? Thanks!
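
    A possible way to classify scope members without relying on GetType() is to test against the IronPython runtime types directly; this is only a sketch based on the types named in the question (IronPython 2.x era), and the class and method names here are illustrative:

        using System;
        using Microsoft.Scripting.Hosting;
        using IronPython.Runtime;
        using IronPython.Runtime.Types;

        static class ScopeWalker
        {
            // Lists every variable in the scope with a rough classification,
            // then enumerates its members through ObjectOperations.
            public static void Describe(ScriptEngine engine, ScriptScope scope)
            {
                foreach (string name in scope.GetVariableNames())
                {
                    object value = scope.GetVariable(name);

                    string kind;
                    if (value is PythonFunction)   kind = "function";
                    else if (value is PythonType)  kind = "new-style class";
                    else if (value is OldClass)    kind = "old-style class";
                    else if (value is OldInstance) kind = "old-style instance";
                    else if (value == null)        kind = "None";
                    else kind = value.GetType().FullName;   // plain CLR value (int, string, ...)

                    Console.WriteLine("{0}: {1}", name, kind);
                    if (value == null) continue;

                    // Members (methods/attributes) can be listed without calling GetType():
                    foreach (string member in engine.Operations.GetMemberNames(value))
                        Console.WriteLine("  .{0}", member);
                }
            }
        }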

  • Function Returning Negative Value

    - by Geowil
    I still have not run it through enough tests; however, for some reason, using certain non-negative values, this function will sometimes pass back a negative value. I have done a lot of manual testing in a calculator with different values, but I have yet to see it display this same behavior. I was wondering if someone would take a look and see if I am missing something.

        float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand)
        {
            return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8);
        }

    The variables all contain randomly generated values:

        popRand1: between 1 and 30
        popRand2: between 10 and 30
        popRand3: between 50 and 100
        pSRand:   between 1 and 1000
        pERand:   between 1.0f and 5500.0f, which is then multiplied by 0.001f before being passed to the function above

    Edit: Alright, so after following the execution a bit more closely, it is not the fault of this function directly. It produces an infinitely positive float which then flips negative when I use this code later on:

        pPMax = (int)pPStore;

    pPStore is a float that holds calcPop's return value. So the question now is: how do I stop the formula from doing this? Testing even with very high values in a calculator has never displayed this behavior. Is there something in how the compiler processes the order of operations that is causing this, or are my values simply going too high? If the latter, I could just increase the division to 16, I think.

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables:

        Nodes(id, value)
        Relations(id, nodeA, nodeB, value)
        Triangles(id, relation1_id, relation2_id, relation3_id)

    In order to generate triangles from both the Nodes and Relations tables, I've used the following query:

        INSERT INTO Triangles
        SELECT t1.id, t2.id, t3.id
        FROM Relations t1, Relations t2, Relations t3
        WHERE t1.id < t2.id AND t3.id > t1.id
          AND (
                t1.nodeA = t2.nodeA
                AND (   t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB
                     OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB)
             OR t1.nodeA = t2.nodeB
                AND (   t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA
                     OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB)
              )

    It's working perfectly on small data sets (~< 50 points). In some cases, however, I've got around 100 points all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even in the millions, the query might take several hours.

    My main problem is not the SELECT query itself; when I watch it execute in Management Studio, the results come back slowly. I receive around 2000 rows per minute, which is not acceptable for my case. As a matter of fact, the size of the operation grows exponentially, and that is terribly affecting the performance. I've tried doing it as LINQ to Objects from my code, but the performance was even worse. I've also tried using SqlBulkCopy on a reader over the result, also with no luck. So the question is... any ideas or workarounds?

  • A little confused about MVC and where to put a database query

    - by jax
    OK, so my Joomla app is in MVC format. I am still a little confused about where to put certain operations: in the controller or in the model. The function below is in the controller; it gets called when &task=remove. Should the database stuff be in the model? It does not seem to fit there, because I have two models, editapp (display a single application) and allapps (display all the applications); now which one would I put the delete operation in?

        /**
         * Delete an application
         */
        function remove()
        {
            global $mainframe;
            $cid = JRequest::getVar('cid', array(), '', 'array');
            $db  =& JFactory::getDBO();

            // if there are items to delete
            if (count($cid)) {
                $cids  = implode(',', $cid);
                $query = "DELETE FROM #__myapp_apps WHERE id IN ( $cids )";
                $db->setQuery($query);
                if (!$db->query()) {
                    echo "<script> alert('" . $db->getErrorMsg() . "');window.history.go(-1); </script>\n";
                }
            }
            $mainframe->redirect('index.php?option=' . $option . '&c=apps');
        }

    I am also confused about how the flow works. For example, there is a display() function in the controller that gets called by default. If I pass a task, does the display() function still run, or does it go directly to the function named by $task?

  • Recursive Iterators

    - by soandos
    I am having some trouble making an iterator that can traverse the following type of data structure. I have a class called Expression, which has one data member, a List<object>. This list can have any number of children, and some of those children might be other Expression objects. I want to traverse this structure and print out every non-list object (but I do want to print out the elements of the list, of course); before entering a list, I want to return "begin nest", and after I have just exited a list, I want to return "end nest".

    I was able to do this if I ignored the class wherever possible and just had List<object> objects with List<object> items when I wanted a sub-expression, but I would rather do away with this and instead have Expressions as the sublists (it would make it easier to do operations on the object). I am aware that I could use extension methods on the List<object>, but it would not be appropriate (who wants an Evaluate method on their list that takes no arguments?). The code that I used for the original iterator (which works) is:

        public IEnumerator GetEnumerator()
        {
            return theIterator(expr).GetEnumerator();
        }

        private IEnumerable theIterator(object root)
        {
            if (root is List<object>)
            {
                yield return " begin nest ";
                foreach (var item in (List<object>)root)
                {
                    foreach (var item2 in theIterator(item))
                    {
                        yield return item2;
                    }
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }

    A type swap of List<object> for Expression did not work and led to a StackOverflowException. How should the iterator be implemented?

    Update: Here is the swapped code:

        public IEnumerator GetEnumerator()
        {
            return this.GetEnumerator();
        }

        private IEnumerable theIterator(object root)
        {
            if (root is Expression)
            {
                yield return " begin nest ";
                foreach (var item in (Expression)root)
                {
                    foreach (var item2 in theIterator(item))
                        yield return item2;
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }
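
    For what it's worth, the StackOverflowException in the swapped version comes from GetEnumerator() returning this.GetEnumerator(), so it calls itself forever instead of delegating to theIterator. A minimal self-contained sketch of an Expression whose enumerator delegates to the recursive iterator (the class layout here is an assumption for illustration, not the poster's actual code):

        using System.Collections;
        using System.Collections.Generic;

        public class Expression : IEnumerable<object>
        {
            private readonly List<object> items = new List<object>();

            public void Add(object item) { items.Add(item); }

            // Delegate to the recursive iterator over *this* expression,
            // not back to GetEnumerator() itself.
            public IEnumerator<object> GetEnumerator()
            {
                return Iterate(this).GetEnumerator();
            }

            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

            private static IEnumerable<object> Iterate(object node)
            {
                Expression expr = node as Expression;
                if (expr == null)
                {
                    yield return node;        // leaf: emit the value itself
                    yield break;
                }

                yield return " begin nest ";
                foreach (object child in expr.items)
                    foreach (object item in Iterate(child))   // recurse into sub-expressions
                        yield return item;
                yield return " end nest ";
            }
        }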

  • Very simple Python functions spend a long time in the function itself and not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (to visualize the results) reveal that grad_logp spends about .00004s 'locally' every call, not in any functions it calls, and the function n spends about .00006s locally every call. Together these two times make up about 30% of the program time that I care about.

    It doesn't seem like this is function overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, yet the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient

  • Msysgit bash is horrendously slow in Windows 7

    - by Kevin L.
    I love git and use it on OS X pretty much constantly at home. At work, we use svn on Windows, but we want to migrate to git as soon as the tools have fully matured (not just TortoiseGit, but also something akin to the really nice Visual Studio integration provided by VisualSVN). But I digress...

    I recently installed msysgit on my Windows 7 machine, and when using the included version of bash, it is horrendously slow. And not just the git operations; clear takes about five seconds. AAAAH! Has anyone experienced a similar issue?

    Edit: It appears that msysgit is not playing nicely with UAC, and this might just be a tiny design oversight resulting from developing on XP or running Vista or 7 with UAC disabled; starting Git Bash using Run as administrator results in the lightning speed I see on OS X (or on 7 after starting Git Bash without a network connection - see @Gauthier's answer).

    Edit 2: AH HA! See my answer.

  • Do I have to implement Add/Delete methods in my NHibernate entities ?

    - by Lisa
    This is a sample from the Fluent NHibernate website. Compared to the Entity Framework, I have Add methods in my POCO in this code sample using NHibernate. With EF I did context.Add or context.AddObject etc.; the context had the methods to put one entity into the other entity's collection! Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?

        public class Store
        {
            public virtual int Id { get; private set; }
            public virtual string Name { get; set; }
            public virtual IList<Product> Products { get; set; }
            public virtual IList<Employee> Staff { get; set; }

            public Store()
            {
                Products = new List<Product>();
                Staff = new List<Employee>();
            }

            public virtual void AddProduct(Product product)
            {
                product.StoresStockedIn.Add(this);
                Products.Add(product);
            }

            public virtual void AddEmployee(Employee employee)
            {
                employee.Store = this;
                Staff.Add(employee);
            }
        }

  • NHibernate: What are child sessions and why and when should I use them?

    - by stefando
    In the comments on Ayende's blog post about auditing in NHibernate there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco). As far as I understand it, this has something to do with the order of the SQL operations that session.Flush will emit. (For example: if I wanted to perform some delete operation in the pre-insert event but the session was already done with deleting operations, I would need some way to inject them in.) However, I did not find documentation about this feature and its behavior.

    Questions:

    - Is my understanding of child sessions correct?
    - How and in which scenarios should I use them?
    - Are they documented somewhere?
    - Could they be used for session "scoping"? (For example: I open a master session which will hold some data and then I create two child sessions from the master one. I'd expect the two child scopes to be separate but to share objects from the master session's cache. Is this the case?)
    - Are they first-class citizens in NHibernate, or are they just a hack to support some edge-case scenarios?

    Thanks in advance for any info.
