Search Results

Search found 17940 results on 718 pages for 'algorithm design'.


  • Boost lambda: Invoke method on object

    - by ckarras
    I'm looking at boost::lambda as a way to make a generic algorithm that can work with any "getter" method of any class. The algorithm is used to detect duplicate values of a property, and I would like it to work for any property of any class. In C#, I would do something like this:

        class Dummy {
            public String GetId() ...
            public String GetName() ...
        }

        IEnumerable<String> FindNonUniqueValues<ClassT>(Func<ClassT,String> propertyGetter) { ... }

    Example use of the method:

        var duplicateIds = FindNonUniqueValues<Dummy>(d => d.GetId());
        var duplicateNames = FindNonUniqueValues<Dummy>(d => d.GetName());

    I can get the "for any class" part to work, using either interfaces or template methods, but have not yet found how to make the "for any method" part work. Is there a way to do something similar to the "d => d.GetId()" lambda in C++ (either with or without Boost)? Alternative, more idiomatic C++ solutions to make the algorithm generic are welcome too. I'm using C++/CLI with VS2008, so I can't use C++0x lambdas.
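    One alternative the question explicitly allows is passing a pointer to member function instead of a lambda. A minimal C++98 sketch of that idea (the container type and names are illustrative, and the getters are assumed to be const):

        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        template <typename ClassT>
        std::set<std::string> FindNonUniqueValues(
            const std::vector<ClassT>& items,
            std::string (ClassT::*getter)() const)
        {
            std::map<std::string, int> counts;
            for (typename std::vector<ClassT>::const_iterator it = items.begin();
                 it != items.end(); ++it)
                ++counts[((*it).*getter)()];           // invoke the getter on each object

            std::set<std::string> duplicates;
            for (std::map<std::string, int>::const_iterator it = counts.begin();
                 it != counts.end(); ++it)
                if (it->second > 1)
                    duplicates.insert(it->first);      // value seen more than once
            return duplicates;
        }

        // usage: std::set<std::string> dupIds = FindNonUniqueValues(dummies, &Dummy::GetId);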

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS:

        Take a value that is unique to the DBMS server instance as one part of the salt,
        And the username as a second part of the salt,
        And create the concatenation of the salt with the actual password,
        And hash the whole string using the SHA-256 algorithm,
        And store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions - this list is unlikely to be comprehensive:

        Encrypting/Hashing plain text passwords in database
        Secure hash and salt for PHP passwords
        The necessity of hiding the salt for a hash
        Clients-side MD5 hash with time salt
        Simple password encryption
        Salt generation and Open Source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
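    For illustration, a minimal sketch of the scheme as described (server-unique value + username + password, hashed with SHA-256), here using OpenSSL's one-shot SHA256() function; the names are illustrative, and a per-password random salt would simply be concatenated in the same way:

        #include <openssl/sha.h>
        #include <cstdio>
        #include <string>

        std::string HashPassword(const std::string& serverUnique,
                                 const std::string& username,
                                 const std::string& password)
        {
            // salt = server-unique value + username, then append the password proper
            const std::string salted = serverUnique + username + password;

            unsigned char digest[SHA256_DIGEST_LENGTH];
            SHA256(reinterpret_cast<const unsigned char*>(salted.data()),
                   salted.size(), digest);

            char hex[2 * SHA256_DIGEST_LENGTH + 1];
            for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
                std::sprintf(hex + 2 * i, "%02x", digest[i]);   // hex-encode for storage
            return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
        }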

    Read the article

  • Deterministic and non uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out; it may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed? Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Characters can then have different bit-lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter/remember a long text each time you want to decrypt the text. So I was wondering whether it is possible to generate a text from a password seed. It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this? EDIT: To expand on the "strength" of varying bit length: if I have a string "test", ASCII values 116, 101, 115, 116, which gives bit values of

        1110100 1100101 1110011 1110100

    Then, say my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or anything like that on this.
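    A minimal sketch of the deterministic, non-uniform generation part (shown in C++11 for illustration; Python's random.seed() followed by weighted choices behaves the same way): a PRNG seeded from the password drives a skewed distribution over characters, so the same seed always reproduces the same text. The alphabet and weights are arbitrary examples.

        #include <iostream>
        #include <random>
        #include <string>

        std::string generate_text(unsigned seed, std::size_t length)
        {
            std::mt19937 rng(seed);                        // deterministic for a given seed
            const std::string alphabet = "etaoinshrdlu ";  // 13 symbols, deliberately skewed
            std::discrete_distribution<int> pick(
                {12, 9, 8, 7, 7, 6, 6, 6, 5, 4, 3, 2, 13}); // non-uniform weights
            std::string out;
            out.reserve(length);
            for (std::size_t i = 0; i < length; ++i)
                out += alphabet[pick(rng)];
            return out;
        }

        int main()
        {
            std::cout << generate_text(42u, 80) << "\n";   // same seed -> same 80 characters
        }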

    Read the article

  • NHibernate correct way to reattach cached entity to different session

    - by Chris Marisic
    I'm using NHibernate to query a list of objects from my database. After I get the list of objects, I iterate over it and apply a distance approximation algorithm to find the nearest object. I consider this function of getting the list of objects and applying the algorithm over them to be a heavy operation, so I cache the object which the algorithm finds in HttpRuntime.Cache. After this point, whenever I'm given the same input again I can pull the object directly from the cache instead of having to hit the database and traverse the list. My object is a complex object that has collections attached to it; inside the query where I return the full list of objects I don't bring back any of the sub-collections eagerly, so when I read my cached object I need lazy loading to work correctly to be able to display the object fully. Originally I tried using this to re-associate my cached object with a new session:

        _session.Lock(obj, LockMode.None);

    However, when accessing the page concurrently from another instance I get the error "Illegal attempt to associate a collection with two open sessions". I then tried something different with:

        _session.Merge(obj);

    However, watching the output of this in NHProf shows that it is deleting and re-associating my object's contained collections with my object, which is not what I want, although it seems to work fine. What is the correct way to do this? Neither of these seems to be right.

    Read the article

  • Calculating distance from latitude, longitude and height using a geocentric co-ordinate system

    - by Sarge
    I've implemented this method in Javascript and I'm roughly 2.5% out and I'd like to understand why. My input data is an array of points represented as latitude, longitude and the height above the WGS84 ellipsoid. These points are taken from data collected from a wrist-mounted GPS device during a marathon race. My algorithm was to convert each point to cartesian geocentric co-ordinates and then compute the Euclidean distance (cf. Pythagoras). Cartesian geocentric is also known as Earth Centred Earth Fixed (ECEF), i.e. it's an X, Y, Z co-ordinate system which rotates with the earth. My test data was the data from a marathon, so the distance should be very close to the marathon distance of 42.195 km. However, the distance comes to about 43.4 km. I've tried various approaches and nothing changes the result by more than a metre, e.g. I replaced the height data with data from the NASA SRTM mission, I've set the height to zero, etc. Using Google, I found two points in the literature where lat, lon, height had been transformed, and my transformation algorithm matches. What could explain this? Am I expecting too much from Javascript's double representation? (The X, Y, Z numbers are very big but the differences between two points are very small.) My alternative is to move to computing the geodesic across the WGS84 ellipsoid using Vincenty's algorithm (or similar) and then calculating the Euclidean distance with the two heights, but this seems inaccurate. Thanks in advance for your help!
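    For reference, a minimal sketch of the standard WGS84 geodetic-to-ECEF conversion described above (C++ for illustration; the same formulas apply in Javascript). The constants are the WGS84 ellipsoid parameters:

        #include <cmath>

        struct Ecef { double x, y, z; };

        Ecef toEcef(double latDeg, double lonDeg, double h)
        {
            const double kPi = 3.14159265358979323846;
            const double a   = 6378137.0;             // WGS84 semi-major axis (metres)
            const double f   = 1.0 / 298.257223563;   // WGS84 flattening
            const double e2  = f * (2.0 - f);         // first eccentricity squared

            const double lat = latDeg * kPi / 180.0;
            const double lon = lonDeg * kPi / 180.0;
            const double N = a / std::sqrt(1.0 - e2 * std::sin(lat) * std::sin(lat));

            Ecef p;
            p.x = (N + h) * std::cos(lat) * std::cos(lon);
            p.y = (N + h) * std::cos(lat) * std::sin(lon);
            p.z = (N * (1.0 - e2) + h) * std::sin(lat);
            return p;
        }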

    Read the article

  • CakePHP: Interaction between different files/classes

    - by Alexx Hardt
    Hey, I'm cloning a commercial student management system. Students use the frontend to apply for lectures, and uni staff can modify events (time, room, etc). The core of the app will be the algorithm which distributes the seats to students. I already asked about it here: How to implement a seat distribution algorithm for uni lectures. Now, I found a class for that algorithm here: http://www.phpclasses.org/browse/file/10779.html I put the 'class GA' into app/vendors. I need to write a 'class Solution', which represents one object (a child, and later a parent for the evolutionary process). I'll also have to write the functions mutate(), crossover() and fitness(). fitness() calculates a score for a solution, based on whether there are overbooked courses etc.; crossover() recombines two parents to produce a child, and mutate() modifies a child after crossover. Now, the fitness() function needs to access a few related models, and their find() functions. It evaluates a solution's fitness by checking e.g. whether there are overbooked courses or unfulfilled wishes, and penalizes that. Where would I put ga.php, solution.php and the three functions? ga.php has to access the functions, but the functions have to access the models. I also don't want to call any App::import()'s from within the fitness() function, because it gets called many thousands of times when the algorithm runs. Hope someone can help me. Thanks in advance =)

    Read the article

  • Largest triangle from a set of points

    - by Faken
    I have a set of random points from which I want to find the largest triangle by area whose vertices each lie on one of those points. So far I have figured out that the largest triangle's vertices will only lie on the outside points of the cloud of points (the convex hull), so I have programmed a function to do just that (using a Graham scan in O(n log n) time). However, that's where I'm stuck. The only way I can figure out how to find the largest triangle from these points is to use brute force in O(n^3) time, which is still acceptable in an average case as the convex hull algorithm usually kicks out the vast majority of points. However, in a worst-case scenario where the points are on a circle, this method would fail miserably. Does anyone know an algorithm to do this more efficiently? Note: I know that CGAL has this algorithm, but they do not go into any details on how it's done. I don't want to use libraries; I want to learn this and program it myself (and also be able to tweak it to exactly the way I want it to operate, just like the Graham scan, where other implementations pick up collinear points that I don't want).
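    For reference, a minimal sketch of the O(h^3) brute force over the h hull vertices the question describes, using the cross product for twice the signed triangle area (types and names are illustrative):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Pt { double x, y; };

        // twice the signed area of triangle (o, a, b)
        double cross(const Pt& o, const Pt& a, const Pt& b)
        {
            return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
        }

        double largestTriangleArea(const std::vector<Pt>& hull)
        {
            double best = 0.0;
            for (std::size_t i = 0; i < hull.size(); ++i)
                for (std::size_t j = i + 1; j < hull.size(); ++j)
                    for (std::size_t k = j + 1; k < hull.size(); ++k)
                    {
                        const double area2 = std::fabs(cross(hull[i], hull[j], hull[k]));
                        if (area2 > best) best = area2;
                    }
            return best / 2.0;   // cross product gave twice the area
        }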

    Read the article

  • Is memory allocation in linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation, e.g.

        struct Node { int a, b; };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node and one of them was suspended by the OS in the middle of allocation, would it block other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput performance of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput performance of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator on Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on for a connected node graph. The nodes are classed into two types, "Networks" and "Routers." In this picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice-versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, however it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Paths which are not possible can also occur, and the network/router graph topology can change at any time, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
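    One commonly suggested replacement for the depth-first brute force is a breadth-first search from each source node: on an unweighted graph, BFS yields shortest paths directly, so rebuilding the whole cache costs one BFS per source network. A minimal sketch with illustrative types (the adjacency list here covers the combined network/router node set; converting hop counts into "networks crossed" is left to the caller):

        #include <cstddef>
        #include <queue>
        #include <vector>

        // Returns, for every node, the minimum number of edges from `source` (-1 = unreachable).
        std::vector<int> shortestHops(int source,
                                      const std::vector<std::vector<int> >& adjacency)
        {
            std::vector<int> dist(adjacency.size(), -1);
            std::queue<int> frontier;
            dist[source] = 0;
            frontier.push(source);
            while (!frontier.empty())
            {
                const int current = frontier.front();
                frontier.pop();
                for (std::size_t i = 0; i < adjacency[current].size(); ++i)
                {
                    const int next = adjacency[current][i];
                    if (dist[next] == -1)               // not visited yet
                    {
                        dist[next] = dist[current] + 1;
                        frontier.push(next);
                    }
                }
            }
            return dist;
        }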

    Read the article

  • Efficient Multiplication of Varying-Length #s [Conceptual]

    - by Milan Patel
    Write the pseudocode of an algorithm that takes in two arbitrary-length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm. I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:

        a * b = (a/2) * 2b           if a is even
        a * b = ((a-1)/2) * 2b + b   if a is odd

    My pseudocode is:

        rpa(x, y){
            if x is 1 return y
            if x is even return rpa(x/2, 2y)
            if x is odd return rpa((x-1)/2, 2y) + y
        }

    I have 3 questions: Is this efficient for arbitrary-length numbers? I implemented it in C and tried varying-length numbers. The run-time was near-instant in all cases, so it's hard to tell empirically... Can I apply the Master Theorem to understand the complexity?

        a = # subproblems in recursion = 1 (max 1 recursive call across all states)
        n/b = size of each subproblem = n/1 => b = 1 (problem doesn't change size...?)
        f(n^d) = work done outside recursive calls = 1 => d = 0 (the addition when a is odd)
        a = 1, b^d = 1, a = b^d => complexity is in n^d*log(n) = log(n)

    This makes sense logically since we are halving the problem at each step, right? What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that? Many thanks in advance.
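    For illustration, a minimal C++ sketch of the recursion for machine-width integers; for the arbitrary-length ("as strings") case, the halving, doubling and addition would be digit-string operations instead, which is where the real cost lives:

        #include <cstdint>
        #include <iostream>

        std::uint64_t rpa(std::uint64_t x, std::uint64_t y)
        {
            if (x == 0) return 0;
            if (x == 1) return y;
            if (x % 2 == 0) return rpa(x / 2, 2 * y);
            return rpa((x - 1) / 2, 2 * y) + y;   // odd case adds one extra y
        }

        int main()
        {
            std::cout << rpa(37, 41) << "\n";     // prints 1517
        }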

    Read the article

  • How can I implement a splay tree that performs the zig operation last, not first?

    - by Jakob
    For my Algorithms & Data Structures class, I've been tasked with implementing a splay tree in Haskell. My algorithm for the splay operation is as follows: If the node to be splayed is the root, the unaltered tree is returned. If the node to be splayed is one level from the root, a zig operation is performed and the resulting tree is returned. If the node to be splayed is two or more levels from the root, a zig-zig or zig-zag operation is performed on the result of splaying the subtree starting at that node, and the resulting tree is returned. This is valid according to my teacher. However, the Wikipedia description of a splay tree says the zig step "will be done only as the last step in a splay operation" whereas in my algorithm it is the first step in a splay operation. I want to implement a splay tree that performs the zig operation last instead of first, but I'm not sure how it would best be done. It seems to me that such an algorithm would become more complex, seeing as how one needs to find the node to be splayed before it can be determined whether a zig operation should be performed or not. How can I implement this in Haskell (or some other functional language)?

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to the other processes (keeping the first nlocal for itself). From here, process 0 should print out its results after running the CompareSplit algorithm. Then, receive the results from the other processes' runs of the algorithm. Finally, print out the results that it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints that people could give me. Here is my code: http://pastie.org/3742474 Right now I'm pretty sure that the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter...but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now, until I figure out what is going wrong. Any help is appreciated.

    Read the article

  • Python to Java translation

    - by obelix1337
    Hello, I have a fairly short piece of algorithm code in Python, but I need to translate it to Java. I didn't find any program to do that, so I would really appreciate help translating it. I learned just enough Python to get the idea of how the algorithm works. The biggest problem is that in Python everything is an object, and some things are written in a way that is really confusing to me, like sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source)), and "self.adj" is like a hashmap with multiple values per key, which I have no idea how to put together. Is there a better collection for this code in Java? The code is:

    [CODE]
    class FlowNetwork(object):
        def __init__(self):
            self.adj, self.flow, = {},{}

        def add_vertex(self, vertex):
            self.adj[vertex] = []

        def get_edges(self, v):
            return self.adj[v]

        def add_edge(self, u,v,w=0):
            self.adj[u].append((v,w))
            self.adj[v].append((u,0))
            self.flow[(u,v)] = self.flow[(v,u)] = 0

        def find_path(self, source, sink, path):
            if source == sink:
                return path
            for vertex, capacity in self.get_edges(source):
                residual = capacity - self.flow[(source,vertex)]
                edge = (source,vertex,residual)
                if residual > 0 and not edge in path:
                    result = self.find_path(vertex, sink, path + [edge])
                    if result != None:
                        return result

        def max_flow(self, source, sink):
            path = self.find_path(source, sink, [])
            while path != None:
                flow = min(r for u,v,r in path)
                for u,v,_ in path:
                    self.flow[(u,v)] += flow
                    self.flow[(v,u)] -= flow
                path = self.find_path(source, sink, [])
            return sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source))

    g = FlowNetwork()
    map(g.add_vertex, ['s','o','p','q','r','t'])
    g.add_edge('s','o',3)
    g.add_edge('s','p',3)
    g.add_edge('o','p',2)
    g.add_edge('o','q',3)
    g.add_edge('p','r',2)
    g.add_edge('r','t',3)
    g.add_edge('q','r',4)
    g.add_edge('q','t',2)
    print g.max_flow('s','t')
    [/CODE]

    The result of this example is "5". The algorithm finds the maximum flow in a graph from source vertex "s" to destination "t". Many thanks for any ideas.

    Read the article

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information or data into an appropriate memory location in any data structure I devise. To give you an idea: I have a JPanel instance and another Container instance of some subtype (this is in Java, because I love this language), and I collect those instances into a data structure that is not specific to those types but applicable to any type of object. Now, my procedure for fetching that data again is to extract the object features that all objects in the data structure share, and transform them into an integer memory location (or any type of data that suits this transformation). I can then access that location directly, without further sorting or O(n) search algorithms (which I think would be fine, but I wanted to do it my own way). The data structure can be of any type: binary tree, linked list, array, set, and the like. What is important is that I don't need successive comparisons and analysis of data just to locate information in big structures. To give you a technical example: I have an array DS that contains a JLabel instance with the specific name "HelloWorld", along with many other kinds of object. That JLabel sits at index [124324], which any search algorithm would be slow to reach, especially in an array (please disregard the efficiency of the data structure used; I just want to explain the concept). Now I want to map "HelloWorld" to 124324 by using a conceptually defined function applicable to all data types, so that I can do a direct lookup with DS[extractLocation("HelloWorld")] to get that JLabel instance. I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search for any data structure. My main problem is how to transform the information being stored into the memory location where it was stored.
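    What is being described is essentially what a hash table does: a hash function turns the key ("HelloWorld") into a bucket index, so the lookup needs no scanning or sorting. A minimal sketch of that idea (C++ for illustration; the types are stand-ins for the Swing objects in the example):

        #include <iostream>
        #include <string>
        #include <unordered_map>

        struct Widget {                    // stand-in for the JLabel in the example
            std::string name;
        };

        int main()
        {
            std::unordered_map<std::string, Widget> ds;    // key -> object, O(1) average lookup
            ds["HelloWorld"] = Widget{"HelloWorld"};

            // the "extractLocation" step is the hash + bucket lookup, done internally:
            std::cout << ds["HelloWorld"].name << "\n";
            return 0;
        }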

    Read the article

  • Boost Include Files in VC++

    - by Dr. K
    For the last few years, I have been exclusively a C# developer. Previously, I developed in C++ and have a C++ application that I built about 3 years ago using VS2005. It made extensive use of the Boost libraries. I recently decided to brush off the old app and rebuild it in VS2008 with the latest version of Boost (the latest version with the "easy" installation program from BoostPro Computing), 1.39. Previously when I had the program running I was at 1.33. Also, the last time the program was running was at least 2 OS installations ago. The Boost installation is located on my machine at: "C:\Program Files\boost\boost_1_39". Anyway, I have done the following: Set the project's "Additional Include Directories" directory to "C:\Program Files\boost\boost_1_39" Added "C:\Program Files\boost\boost_1_39" to VS2008's Tools - Options - Projects and Solutions - VC++ Directories - Include Files I have a number of Boost includes in my stdafx.h file. The compiler fails upon attempting to open the first one - #include <boost/algorithm/string/string.hpp> I have confirmed that the above file is indeed located at "C:\Program Files\boost\boost_1_39\boost\algorithm\string\string.hpp" I continue to get: fatal error C1083: Cannot open include file: 'boost/algorithm/string/string.hpp': No such file or directory Any tips on what else to check would be greatly appreciated. Again, this is an application that compiled fine a few years ago, but the source has now been moved to a new machine/compiler.

    Read the article

  • Passing C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++ and parsing the input text files is one of its bottlenecks. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run-time. The C++ code has some pre-defined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all. What I need is that both the algorithm and the function evaluation remain speedy, so it is advantageous to keep them both as compiled code (and preferably, the math functions as C++ function objects). However, I thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib) and GUI, too, in Python. I know that most of the time, exposing C++ classes can be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: Is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call

        f = WrappedCPPGaussianFunctionClass(sigma=0.5)
        WrappedCPPAlgorithm(f)

    in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar... (don't ask me about memory management in this case, though :S) The point is that no callback should be made to Python code in the algorithm. Later I would like to extend this example to also do some simple expression parsing on the Python side, such as sum or product of functions, and return some compound, parse-tree-like C++ object, but let's stay at the basics for now. Sorry for the long post and thanks for the suggestions in advance.
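    One possibility is Boost.Python (SWIG offers equivalent facilities): expose the C++ function object as a Python class and let the exposed algorithm accept it by reference, so the inner loop never calls back into Python. A minimal, hypothetical sketch; all names are illustrative:

        #include <boost/python.hpp>
        #include <cmath>

        struct GaussianFunction {
            explicit GaussianFunction(double sigma) : sigma_(sigma) {}
            double operator()(double x) const {
                return std::exp(-x * x / (2.0 * sigma_ * sigma_));
            }
            double sigma_;
        };

        double run_algorithm(const GaussianFunction& f) {
            double acc = 0.0;                   // stand-in for the real simulation loop
            for (int i = 0; i < 1000000; ++i)
                acc += f(i * 1e-6);             // pure C++ calls, no Python involved
            return acc;
        }

        BOOST_PYTHON_MODULE(simcore)
        {
            using namespace boost::python;
            class_<GaussianFunction>("GaussianFunction", init<double>());
            def("run_algorithm", &run_algorithm);
        }

        // Python side:  f = simcore.GaussianFunction(0.5); simcore.run_algorithm(f)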

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is:

        word [tab] frequency [new line]

    The algorithm for sorting is:

        Read some of the file, filtering for purely alphabetic words.
        Once you have X number of alphabetic words, call Collections.sort and write the result to a file.
        Repeat until you have finished reading the file.
        Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file so as not to load much into memory.
        Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, except I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. Then if I compare them line by line I would get file3 = 9,8,8,6,8,5,8,3,8,1 which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks. edit: Yes this is an assignment. We aren't allowed to increase memory, unfortunately :(
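    The detail the worked example misses is that after each comparison only the file that supplied the winning line should advance; the other line is held back and compared again. A minimal sketch of that two-way merge (shown in C++ for brevity, since the structure carries straight over to Java's BufferedReader/Writer; the tab-separated frequency parsing is simplified and error handling is omitted):

        #include <fstream>
        #include <sstream>
        #include <string>

        // Parse the frequency from a "word<TAB>frequency" line.
        long frequencyOf(const std::string& line)
        {
            std::istringstream in(line.substr(line.find('\t') + 1));
            long f = 0;
            in >> f;
            return f;
        }

        void mergeDescending(const char* fileA, const char* fileB, const char* outFile)
        {
            std::ifstream a(fileA), b(fileB);
            std::ofstream out(outFile);
            std::string lineA, lineB;
            bool haveA = static_cast<bool>(std::getline(a, lineA));
            bool haveB = static_cast<bool>(std::getline(b, lineB));
            while (haveA && haveB)
            {
                if (frequencyOf(lineA) >= frequencyOf(lineB))
                {   // A wins: write it and advance only A; lineB is compared again next time
                    out << lineA << '\n';
                    haveA = static_cast<bool>(std::getline(a, lineA));
                }
                else
                {
                    out << lineB << '\n';
                    haveB = static_cast<bool>(std::getline(b, lineB));
                }
            }
            // copy whatever is left in the non-empty file
            while (haveA) { out << lineA << '\n'; haveA = static_cast<bool>(std::getline(a, lineA)); }
            while (haveB) { out << lineB << '\n'; haveB = static_cast<bool>(std::getline(b, lineB)); }
        }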

    Read the article

  • How to configuration keepalived on Amazon EC2?

    - by oeegee
    I read an article, "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure this, or what this architecture is called. I only know how to set up a Master/Backup configuration in keepalived. What I want to know is how keepalived works, and how to make it work with the unicast patch module, because ELB is expensive. I want to design this:

        XMPP Server (EC2)
                |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
                |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    This was the first overall design. [Flow] ELB -- XMPP Server -- ELB -- Cassandra:

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        ELB
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    And this is a change to the first design. [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra:

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    This is the second design. [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra. Is that OK?

        ELB
         |
        HAProxy#1  HAProxy#2  HAProxy#3  HAProxy#4
        XMPP#1     XMPP#2     XMPP#3     XMPP#4
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Thanks!

    Read the article

  • Why Does DreamWeaver CS5 Discriminate between File Extensions, Even After Modding Mime Types!?

    - by Sam
    Hi folks, even after I forced DreamWeaver CS5 to allow opening of .ast extensions as a MIME type of php5, which DreamWeaver now opens and colors correctly as described here, I still have trouble figuring out why it discriminates between the two file extensions! Symptoms (external files & Design view): I have a file foo.php which php-includes other files (e.g. the php-combined css.php and js.php). Now, when opening foo.php all functions work perfectly: the external (included) php files are all recognised correctly. However, when I rename foo.php to foo.ast and open it again, it does not recognise the file extensions anymore in the top bar. Also, I lose the Design / Live View functionality. When I change foo.ast back to foo.php, all works again! Anyone any clues as to why there remains a difference between one extension and the other? Note 1: I have added the .ast extension, next to .php, to these four files:

        1. C:\Users\Sam\AppData\Local\VirtualStore\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
        2. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
        3. C:\Users\Sam\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Extensions.txt
        4. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\Extensions.txt

    Note 2: sometimes, even .php files do not want to show in Design view or Live view. Could this be caused by a corrupted installation?

    Read the article

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a Weblogic web app, an ftp server, an ejabberd server, etc). There are a number of weblogic web apps - and one thing we need to decide is how many weblogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are:

        How to map components to physical servers (and weblogic containers).
        Identify all communication paths, ensure all are either resilient or there's an "upstream" comms path that is resilient, and failover of that depends on all single points of failure "downstream".
        Decide where to terminate SSL (on load balancers, or on Apache servers, for instance).

    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

    Read the article

  • Excel controls not visible for certain users

    - by Nossidge
    One of the users of an Excel program I've written is having a weird problem. None of the control objects (Command Button, ComboBox, etc.) are visible to him when he opens the file on his laptop. He is using Excel 2003, the same version I used to create the program, and enables macros using the pop-up when the file loads. I have Googled this, and have found these people who seem to be having the exact same problem, with various versions of Excel. Unfortunately, none of their questions were answered. I can't really explain it any better than this user: If I enter design mode and pull a control from the control toolbar onto a sheet all I see are the drag handles. When not in design mode I have to feel around with the mouse and can click the button which executes the button click code correctly and opens another sheet where again I have to feel around for the buttons to return me to the original sheet. The button I managed to click is now visible but as soon as I click anywhere on the sheet it disappears. I have verified that the visible property of the buttons is set and that the Show All Objects on the Options View tab is selected. If I pull buttons from the Forms toolbar onto a sheet they are visible. If I try to find Objects using F5 when not in design mode Excel reports no objects on the sheet. So, Super Users, can you help? UPDATE: Thanks for your replies, but much like the person in the ozgrid link, the problem has gone away. Not sure why it went, but I can confirm that the user rebooted again and also started up other Excel files that didn't contain controls in the interim. Perhaps that fixed it, or maybe it'll be back again. I'll keep updating with progress, and close if the problem doesn't reoccur for the next few days. Thanks again.

    Read the article

  • WPF Login Screen

    - by Asim Sajjad
    I have searched on Google for login screens designed in WPF, as I need to use one of the best login screens for my application. Are there any good login screens available so that I can look at them and choose one? I have no idea what a good login screen design looks like. Please help me; thanks in advance.

    Read the article

  • Is there a RAD for Android ?

    - by mawg
    Anything to help design GUI like a paint program? (Delphi, VB, MSVC, QtBuilder, etc) And anything to help build packages, set permissions, etc? What is there out there to take the drudge work out of Android app creation and leave me free to concentrate on design and development?

    Read the article

  • Advantages of Migrating Flex3 App to Flex4

    - by Srirangan
    Hi guys, What are the advantages of migrating a Flex 3 app (Java backend, BlazeDS, Spring, Hibernate) to Flex 4? One of the biggest advantages of Flex 4 is design. Assuming that design / UI isn't a big driver for the business right now as compared to performance, what are the other factors we can highlight? We have implemented Cairngorm and Swiz on the App (with a gradual "roll out" of Cairngorm planned for the future). Any opinions? Thanks, Sri

    Read the article

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an App Domain, there are many different locations that Fusion (the .Net assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .Net runtime (the Assembly._nLoad internal method seems to be the entry-point when Reflect-Loading - and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths. My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup - but the benefits we get from this component totally outweigh this. The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later):

        void PreLoad(IEnumerable<string> paths)
        {
            foreach (var p in paths)
            {
                PreLoad(p);
            }
        }

        void PreLoad(string p)
        {
            // all try/catch blocks are elided for brevity
            string[] files = null;
            files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);
            AssemblyName a = null;
            foreach (var s in files)
            {
                a = AssemblyName.GetAssemblyName(s);
                if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                    assembly => AssemblyName.ReferenceMatchesDefinition(
                        assembly.GetName(), a)))
                    Assembly.LoadFrom(s);
            }
        }

    LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for it, it doesn't find one loaded from where it expects to find it. So, with this in place, all I now have to do is to get a list in precedence order (highest to lowest) of the search paths that Fusion is going to be using when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app. My first iteration of this simply used AppDomain.BaseDirectory. This works for services, forms apps and console apps. It doesn't work for an Asp.Net website, however, since there are at least two main locations - the AppDomain.DynamicDirectory (where Asp.Net places its dynamically generated page classes and any assemblies that the Aspx page code references), and then the site's Bin folder - which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property. So I now have working code for the most basic types of apps (Sql Server-hosted AppDomains are another story since the filesystem is virtualised) - but I came across an interesting issue a couple of days ago where this code simply doesn't work: the nUnit test runner. This uses both Shadow Copying (so my algorithm would need to be discovering and loading the assemblies from the shadow-copy drop folder, not from the bin folder) and it sets up the PrivateBinPath as being relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered; but which must be valid because otherwise Fusion would choke on loading the assemblies.
    I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up - what I want is, given an AppDomain and its setup information, the ability to produce the list of folders that I should scan in order to pick up all the DLLs that are going to be loaded, regardless of how the AppDomain is set up. If Fusion can see them all as the same, then so should my code. Of course, I might have to alter the algorithm if .Net changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge cases that remain unsupported for now. Any ideas!?

    Read the article
