Search Results

Search found 11306 results on 453 pages for 'methods'.


  • Developer career feeling like going back in time with every new job [closed]

    - by komediant
    Is there a good category for this question? My background is a bachelor in ICT, and I've been programming as a hobby since I was around twelve. Started with QBasic, Pascal, C, Java, et cetera. I've now been working for about eight or nine years, half in the academic/medical world and half in the company world.

    A few years ago I started with frameworks, beginning with Grails (Spring/Hibernate underneath), which was a heavenly job: very productive and no hassle. At my previous job I developed in pure Spring/Hibernate Java, which meant writing a bit more annotations and XML, with no conventions like Grails. But still, I liked Spring/Hibernate a lot, and the professional setup with a development street, versioning, Jenkins/Sonar, log4j and a good IDE like IntelliJ. It felt quite 'clear' and organised, although I knew Grails, which felt a bit more productive.

    But... at my current job almost half the code is pure servlets, hard-coded JDBC (connections handled by yourself), scriptlets in all JSP pages, no service layer, no versioning, no Maven, HTML in the DAO layer, JAR hell, no hot-swap deployment locally; for every change you have to deploy and hope it works fine on the server. All local development needs ugly scriptlet tags to check which environment it is running in. Et cetera. Now and then the developers work over in the evening - I don't - and still lots of issues are not solved and new projects are waiting. I hear the developers complaining, but somehow they feel like what they have now is "advanced", or they are in a sort of comfort zone. The lead developer seems open to new things, but half of the time he says he can implement MVC-framework features himself instead of using what is already out there.

    So in short, I currently feel like I miss all the modern framework techniques and that the company is moving forward very slowly. I have only worked here for two months. What I do now is also code some partially ugly stuff, but it goes completely against my nature and I feel uncomfortable with it. Coding something takes longer than estimated, my manager complains about why it takes so long, and I feel ashamed for needing so much time. Where I used to just write a query, I now build up whole try-catch methods. My manager knows my complaints, and the developers do too. There will be a meeting to lay out plans for 2013 on technology and the issues I and the company are facing. I am not looking for another job yet; it's close to where I live and the economy is fragile.

    Has anyone else had this kind of career experience, feeling like you are going backwards with technology? And how did you cope with it?


  • Object Oriented Design of a Small Java Game

    - by user2733436
    This is the problem I am dealing with: I have to make a simple game of NIM. I am learning Java using a book, and so far I have only coded programs that deal with 2 classes. This program would have about 4 classes I guess, including the main class. My problem is that I am having a difficult time designing the classes and how they will interact with each other. I really want to think in, and use, an object-oriented approach.

    So the first thing I did was design the Pile class, as it seemed the easiest and made the most sense to me in terms of what methods go in it. Here is what I have got down for the Pile class so far:

      package Nim;

      import java.util.Random;

      public class Pile {
          private int initialSize;
          Random rand = new Random();

          public Pile() {
          }

          public void setPile() {
              initialSize = (rand.nextInt(100 - 10) + 10);
          }

          public void reducePile(int x) {
              initialSize = initialSize - x;
          }

          public int getPile() {
              return initialSize;
          }

          public boolean hasStick() {
              if (initialSize > 0) {
                  return true;
              } else {
                  return false;
              }
          }
      }

    Now I need help in designing the Player class. By that I mean I am not asking anyone to write code for me, as that defeats the purpose of learning; I was just wondering how I would design the Player class and what would go in it. My guess is that the Player class would contain a method for choosing the computer's move and also for receiving the move the human user makes. Lastly, I am guessing the turns would be handled in the Game class. I am really lost right now, so it would be great if someone could help me think through this problem, starting with the Player class. I know there are some solutions to this problem online, but I refuse to look at them because I want to develop my own approach to such problems, and I am confident that if I can get through this problem I can solve other problems. I apologize if this question is a bit poor, but specifically I need help in designing the Player class.
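
    One possible shape for the Player class, purely as a sketch to get unstuck - the 1-to-3-sticks-per-move rule, the chooseMove name and the computer's random strategy are assumptions, not part of the question:

      package Nim;

      import java.util.Random;
      import java.util.Scanner;

      public class Player {
          private final String name;
          private final boolean isComputer;
          private final Random rand = new Random();
          private final Scanner input = new Scanner(System.in);

          public Player(String name, boolean isComputer) {
              this.name = name;
              this.isComputer = isComputer;
          }

          public String getName() {
              return name;
          }

          // Decide how many sticks to take this turn. Assumes the Game class
          // only calls this while pile.hasStick() is true.
          public int chooseMove(Pile pile) {
              int maxTake = Math.min(3, pile.getPile());
              if (isComputer) {
                  return 1 + rand.nextInt(maxTake); // naive strategy: any legal amount
              }
              System.out.print(name + ", how many sticks (1-" + maxTake + ")? ");
              return input.nextInt(); // a real game would validate this
          }
      }

    The Game class can then own one Pile and two Player objects and simply alternate: call chooseMove on the current player, pass the result to reducePile, and stop when hasStick() returns false.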


  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract – think of a scenario where a consumer persists its own representation of your entity and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag is different from the consumer's version. If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so that an entity with the same state always generates the same ETag. I have an implementation of a generic ETag generator on GitHub here: EntityTagger code sample.

    The essence is simple: we get the entity, serialize it and build a hash from the serialized value. Any change to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

      EntityTagger.SetETag(user, x => x.ETag);

    The implementation is, all in, about 80 lines of code, which is all pretty straightforward:

      var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
      var originalETag = eTagProperty.GetValue(entity, null);
      try
      {
          ResetETag(entity, eTagPropertyAccessor);
          string json;
          var serializer = new DataContractJsonSerializer(entity.GetType());
          using (var stream = new MemoryStream())
          {
              serializer.WriteObject(stream, entity);
              json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
          }
          var guid = GetDeterministicGuid(json);
          eTagProperty.SetValue(entity, guid.ToString(), null);
          //...

    There are a couple of helper methods to check whether the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with the exact same structure but different names, then instances which have the same content will have the same ETag. You may want that behaviour, but switch to the XML DataContractSerializer if you think it will be an issue. If you can persist the ETag somewhere, it will save you server processing to load up the entity, but that only applies to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).
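
    The GetDeterministicGuid helper isn't shown in the excerpt, but the core trick - hash the serialized string into a stable GUID - is easy to sketch. A minimal Java equivalent, assuming the entity has already been serialized to a JSON string (UUID.nameUUIDFromBytes builds a name-based, MD5-hashed UUID, so equal input always produces the same value):

      import java.nio.charset.StandardCharsets;
      import java.util.UUID;

      public final class ETagger {
          // Same serialized entity in, same ETag out - deterministic by construction.
          public static String eTagFor(String serializedEntity) {
              byte[] bytes = serializedEntity.getBytes(StandardCharsets.UTF_8);
              return UUID.nameUUIDFromBytes(bytes).toString(); // version-3 (MD5) UUID
          }
      }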


  • How should game objects be aware of each other?

    - by Jefffrey
    I find it hard to find a way to organize game objects so that they are polymorphic but at the same time not polymorphic. Here's an example: assume that we want all our objects to update() and draw(). In order to do that we need to define a base class GameObject which has those two pure virtual methods, and let polymorphism kick in:

      class World {
      private:
          std::vector<GameObject*> objects;
      public:
          // ...
          update() {
              for (auto& o : objects) o->update();
              for (auto& o : objects) o->draw(window);
          }
      };

    The update method is supposed to take care of whatever state the specific class of object needs to update. The thing is that each object needs to know about the world around it. For example: a mine needs to know if someone is colliding with it; a soldier should know if another team's soldier is in proximity; a zombie should know where the closest brain, within a radius, is.

    For passive interactions (like the first one) I was thinking that the collision detection could delegate what to do in specific cases of collision to the object itself with an on_collide(GameObject*). Most of the other information (like the other two examples) could just be queried via the game world passed to the update method. Now, the world does not distinguish objects based on their type (it stores all objects in a single polymorphic container), so what an ideal world.entities_in(center, radius) will in fact return is a container of GameObject*. But of course the soldier does not want to attack other soldiers from his own team, and a zombie doesn't care about other zombies. So we need to distinguish the behavior. A solution could be the following:

      void TeamASoldier::update(const World& world) {
          auto list = world.entities_in(position, eye_sight);
          for (const auto& e : list)
              if (auto enemy = dynamic_cast<TeamBSoldier*>(e))
                  // shoot towards enemy
      }

      void Zombie::update(const World& world) {
          auto list = world.entities_in(position, eye_sight);
          for (const auto& e : list)
              if (auto enemy = dynamic_cast<Human*>(e))
                  // go and eat brain
      }

    but of course the number of dynamic_cast<> per frame could be horribly high, and we all know how slow dynamic_cast can be. The same problem also applies to the on_collide(GameObject*) delegate that we discussed earlier. So what is the ideal way to organize the code so that objects can be aware of other objects and be able to ignore them or take actions based on their type?
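
    One common answer is to centralize the type filtering in a single typed query on the world, so each entity asks for exactly what it cares about and the casts live in one audited place (often backed by per-type lists or a spatial index rather than a scan). A minimal sketch of the idea - in Java for consistency with the other examples on this page; the same shape works in C++ with a function template; entitiesIn and the field names are hypothetical:

      import java.util.ArrayList;
      import java.util.List;

      abstract class GameObject {
          double x, y;
          double distanceTo(double px, double py) {
              return Math.hypot(x - px, y - py);
          }
      }

      class World {
          private final List<GameObject> objects = new ArrayList<>();

          // Only nearby entities of the requested type are returned; the single
          // isInstance check here replaces dynamic_casts scattered across update().
          <T> List<T> entitiesIn(Class<T> type, double cx, double cy, double radius) {
              List<T> result = new ArrayList<>();
              for (GameObject o : objects) {
                  if (type.isInstance(o) && o.distanceTo(cx, cy) <= radius) {
                      result.add(type.cast(o));
                  }
              }
              return result;
          }
      }

    A zombie's update then reads world.entitiesIn(Human.class, x, y, eyeSight) and never sees a fellow zombie; if the per-frame cost still bites, the world can keep one list per concrete type and skip the instance checks entirely.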


  • Getting a SecurityToken from a RequestSecurityTokenResponse in WIF

    - by Shawn Cicoria
    When you’re working with WIF and WSTrustChannelFactory, when you call the Issue operation you can also request a RequestSecurityTokenResponse as an out parameter. However, what can you do with that object? Well, you could keep it around and use it for subsequent calls with the extension method CreateChannelWithIssuedToken – or can you?

      public static T CreateChannelWithIssuedToken<T>(this ChannelFactory<T> factory, SecurityToken issuedToken);

    As you can see from the method signature, it takes a SecurityToken – but that's not present on the RequestSecurityTokenResponse class. However, through a little magic you can get a GenericXmlSecurityToken by means of the set of extension methods below – just call rstr.GetSecurityTokenFromResponse() and you'll get a GenericXmlSecurityToken as a return.

      public static class TokenHelper
      {
          /// <summary>
          /// Takes a RequestSecurityTokenResponse, pulls out the GenericXmlSecurityToken usable for further WS-Trust calls
          /// </summary>
          public static GenericXmlSecurityToken GetSecurityTokenFromResponse(this RequestSecurityTokenResponse rstr)
          {
              var lifeTime = rstr.Lifetime;
              var appliesTo = rstr.AppliesTo.Uri;
              var tokenXml = rstr.GetSerializedTokenFromResponse();
              var token = GetTokenFromSerializedToken(tokenXml, appliesTo, lifeTime);
              return token;
          }

          /// <summary>
          /// Provides a token as an XML string.
          /// </summary>
          public static string GetSerializedTokenFromResponse(this RequestSecurityTokenResponse rstr)
          {
              var serializedRst = new WSFederationSerializer().GetResponseAsString(rstr, new WSTrustSerializationContext());
              return serializedRst;
          }

          /// <summary>
          /// Turns the XML representation of the token back into a GenericXmlSecurityToken.
          /// </summary>
          public static GenericXmlSecurityToken GetTokenFromSerializedToken(this string tokenAsXmlString, Uri appliesTo, Lifetime lifetime)
          {
              RequestSecurityTokenResponse rstr2 = new WSFederationSerializer().CreateResponse(
                  new SignInResponseMessage(appliesTo, tokenAsXmlString),
                  new WSTrustSerializationContext());

              return new GenericXmlSecurityToken(
                  rstr2.RequestedSecurityToken.SecurityTokenXml,
                  new BinarySecretSecurityToken(
                      rstr2.RequestedProofToken.ProtectedKey.GetKeyBytes()),
                  lifetime.Created.HasValue ? lifetime.Created.Value : DateTime.MinValue,
                  lifetime.Expires.HasValue ? lifetime.Expires.Value : DateTime.MaxValue,
                  rstr2.RequestedAttachedReference,
                  rstr2.RequestedUnattachedReference,
                  null);
          }
      }


  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...); for each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced.

    I have to implement this in Java or C++. My first idea was to define functions / methods like:

    - One method to get all the lines from the file.
    - One method to filter out the unwanted lines.
    - One method to parse the filtered lines into valid records.
    - One method to remove duplicate records.
    - One method to group records and number them.

    The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell, I immediately thought about some kind of lazy evaluation, in which, instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is: which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages?
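
    The shape being described is essentially a chain of lazy iterators - the pipes-and-filters style that java.util.stream.Stream standardized in Java 8: Files.lines is lazy, and each stage pulls one element at a time, so memory use stays bounded. The stateless steps map directly onto filter/map; consecutive de-duplication and per-group numbering are stateful, so in this sketch they live in the terminal loop. Rec, keepLine and parse are placeholders for the real record type and criteria:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.stream.Stream;

      public class LazyPipeline {
          record Rec(String group, String payload) {}   // field-based equals(), good for dedup

          public static void main(String[] args) throws IOException {
              try (Stream<String> lines = Files.lines(Paths.get("input.txt"))) {
                  // Lazy stages: each line flows through the whole pipeline and is
                  // garbage-collectable before the next line is even read.
                  Iterable<Rec> records = lines
                          .filter(LazyPipeline::keepLine)   // drop unwanted lines
                          .map(LazyPipeline::parse)         // line -> record, null if invalid
                          .filter(r -> r != null)           // drop invalid records
                          ::iterator;

                  Rec previous = null;
                  String currentGroup = null;
                  int number = 0;
                  for (Rec r : records) {
                      if (r.equals(previous)) continue;     // consecutive duplicate
                      previous = r;
                      if (!r.group().equals(currentGroup)) {
                          currentGroup = r.group();         // new group: restart numbering
                          number = 0;
                      }
                      consume(r, ++number);
                  }
              }
          }

          static boolean keepLine(String line) { return !line.isBlank(); }  // placeholder criterion

          static Rec parse(String line) {                   // placeholder parser
              String[] parts = line.split("\\s+", 2);
              return parts.length == 2 ? new Rec(parts[0], parts[1]) : null;
          }

          static void consume(Rec r, int n) { System.out.println(n + " " + r); }
      }

    In C++ the same shape falls out of range adaptors (C++20 std::ranges views) or hand-rolled iterator decorators; the key in either language is that every stage exposes a pull interface instead of returning a full container.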


  • Is a factory pattern to prevent multiple instances of the same object (instances that are equal) good design?

    - by dsollen
    I have a number of objects storing state. There are essentially two types of fields: the ones that uniquely define what the object is (what node, what edge, etc.), and the ones that store state describing how these things are connected (this node is connected to these edges, this edge is part of these paths, etc.). My model updates the state variables using package methods, so these objects all act as immutable to anyone not in the Model scope. All objects extend one base type.

    I've toyed with the idea of a Factory approach which accepts a Builder object and constructs the applicable object. However, if an instance of the object already exists (i.e. one that would return true if I created the object defined by the builder and passed it to the equals method of the existing instance), the factory returns the current object instead of creating a new instance. Because the equals method would only compare what uniquely defines the type of object (this is node A, not node B) but won't check the dynamic state stuff (node A is currently connected to nodes C and E), this would be a way of ensuring that anyone who wants my node A automatically knows its state connections. More importantly, it would prevent the aliasing nightmare of someone trying to pass around an instance of node A with different state than the node A in my model has.

    I've never heard of this pattern before, and it's a bit odd. I would have to do some overriding of serialization methods to make it work (ensure that when I read in a serialized object I add it to my factory's list of known instances, and/or return an existing instance in its place), as well as using a WeakHashMap as if it were a weak hash set, to know whether an instance exists without a quasi-memory leak occurring. I don't know if this is too confusing or prone to its own obscure bugs.

    One thing I know is that plugins interface with the lowest-level hardware. The plugins have to be able to return state that is different from my memory - to tell my memory when its own state is inconsistent. I believe this is possible despite their fetching objects that exist in my memory: we allow building of objects without checking their consistency with the model until addToModel is called anyway, and the existing plugin design was written before all this extra state existed and worked fine without ever being aware of it. Should I just be using some other design to avoid this craziness? (I have another question to that effect which I'm posting.)
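
    For what it's worth, this does have a name: it is essentially the Flyweight pattern applied as instance interning (java.lang.Integer.valueOf and String.intern do the same thing). A minimal generic sketch of the WeakHashMap trick - the value is a WeakReference to the same object as the key, so the map itself never keeps an instance alive:

      import java.lang.ref.WeakReference;
      import java.util.Map;
      import java.util.WeakHashMap;

      final class Interner<T> {
          private final Map<T, WeakReference<T>> canonical = new WeakHashMap<>();

          // Returns the existing equal instance if one is still live, otherwise
          // registers and returns the candidate. Requires equals/hashCode to
          // cover only the identity fields, exactly as described above.
          synchronized T intern(T candidate) {
              WeakReference<T> ref = canonical.get(candidate);
              T existing = (ref == null) ? null : ref.get();
              if (existing != null) {
                  return existing;
              }
              canonical.put(candidate, new WeakReference<>(candidate));
              return candidate;
          }
      }

    The serialization half also has a standard hook: give the base type a readResolve() method that returns factory.intern(this), and deserialized duplicates collapse onto the canonical instance automatically.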


  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw & update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. I'm not sure what the best way to chain animations / sound effects / other timed general events is. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so:

      Update() {
          Animation anim1 = new Animation(.....);
          Animation anim2 = new Animation(.....);
          anim1.Run();
          if (anim1.State == AnimationState.Complete)
              anim2.Run();
          if (anim2.State == AnimationState.Complete)
              // Load 'PlayScreen' screen
      }

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it, to signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad.

    I've also tried using events: call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.
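
    A lightweight middle ground between hardcoding and a full AnimationManager is a small sequence object that owns the ordering, so neither the screen nor the animations need flags or cross-wired events. A sketch of the idea - in Java for consistency with the other examples here; the translation to C#/XNA is one-to-one, and the assumption is that each step can report its own completion when polled:

      import java.util.ArrayDeque;
      import java.util.Queue;
      import java.util.function.BooleanSupplier;

      // Runs queued steps strictly one at a time; a step's supplier is called
      // every frame and returns true once that step has finished.
      final class Sequence {
          private final Queue<BooleanSupplier> steps = new ArrayDeque<>();

          Sequence then(BooleanSupplier step) {
              steps.add(step);
              return this;
          }

          // Call once per frame from the screen's update loop.
          void update() {
              BooleanSupplier current = steps.peek();
              if (current != null && current.getAsBoolean()) {
                  steps.remove();   // finished: the next step starts next frame
              }
          }

          boolean isDone() {
              return steps.isEmpty();
          }
      }

    The PlayScreen builds it once - something like intro.then(anim1Step).then(soundStep).then(anim2Step), with the step names standing in for your own - ticks intro.update() each frame, and hands control to the player when isDone() turns true; no intermediary screen and no completion events needed.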


  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant but is having a performance issue - and, more than likely, in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and making faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time.

    For this purpose, I'd used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM.

    The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library and then Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

      /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.


  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest project I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system.

    Motivations for Re-designing a Large Information System

    The system is one that's been in place for a number of years and was originally designed to do a significantly different job from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality:

    - Input – adding information to the system
    - Storage – persisting information in an efficient, searchable structure
    - Output – delivering the information to the client
    - Control – management of the process

    There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as:

    - Overall system reliability
    - System response time
    - Failure isolation and recovery
    - Maintainability of code and information
    - General extensibility to solve future problems
    - Separation of business and product concerns
    - New or improved features

    The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign.

    Business Drivers

    Typically all software engineers would always prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors, and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be actual value able to be delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and also value within a shorter timeframe. As the only thing of concern was the methods for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are in, is key to finding the best solutions.


  • Is the Observer pattern adequate for this kind of scenario?

    - by Omega
    I'm creating a simple game development framework with Ruby. There is a node system. A node is a game entity, and it has a position. It can have children nodes (and one parent node). Children are always drawn relative to their parent. Nodes have a @position field. Anyone can modify it. When such a position is modified, the node must update its children accordingly to properly draw them relative to itself. @position contains a Point instance (a class with x and y properties, plus some other useful methods). I need to know when a node's @position's state changes, so I can tell the node to update its children. This is easy if the programmer does something like this:

      @node.position = Point.new(300,300)

    Because it is equivalent to calling this:

      # Code in the Node class
      def position=(newValue)
        @position = newValue
        update_my_children # <--- I know that the position changed
      end

    But I'm lost when this happens:

      @node.position.x = 300

    The only one that knows that the position changed is the Point instance stored in the @position property of the node. But I need the node to be notified! It was at this point that I considered the Observer pattern. Basically, Point is now observable. When a node's position property is given a new Point instance (through the assignment operator), it will stop observing the previous Point it had (if any), and start observing the new one. When a Point instance gets a state change, all observers (the node owning it) will be notified, so now my node can update its children when the position changes. A problem is when this happens:

      @someNode.position = @anotherNode.position

    This means that two nodes are observing the same point. If I change one node's position, the other would change as well. To fix this, when a position is assigned, I plan to create a new Point instance, copy the passed argument's x and y, and store my newly created point instead of storing the passed one. Another problem I fear is this:

      somePoint = @node.position
      somePoint.x = 500

    This would, technically, modify @node's position. I'm not sure if anyone would be expecting that behavior. I'm under the impression that people see Point as some kind of primitive rather than an actual object. Is this approach even reasonable? Reasons I'm feeling skeptical:

    - I've heard that the Observer pattern should be used with, well, many observers. Technically, in this scenario there is only one observer at a time.
    - When assigning a node's position as another's (@someNode.position = @anotherNode.position), creating a whole new instance rather than storing the passed point feels hackish, or even inefficient.


  • How should I refactor switch statements like this (Switching on type) to be more OO?

    - by Taytay
    I'm seeing some code like this in our code base and want to refactor it (TypeScript pseudocode follows):

      class EntityManager {
          private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
              var existingEntity: IEntity = null;
              switch (entityType) {
                  case Types.UserSetting:
                      existingEntity = this.getUserSettingByUserIdAndSettingName(serverObject.user_id, serverObject.setting_name);
                      break;
                  case Types.Bar:
                      existingEntity = this.getBarByUserIdAndId(serverObject.user_id, serverObject.id);
                      break;
                  // Lots more case statements here...
              }
              return existingEntity;
          }
      }

    The downsides of switching on type are self-explanatory. Normally, when switching behavior based on type, I try to push the behavior into subclasses so that I can reduce this to a single method call and let polymorphism take care of the rest. However, the following two things are giving me pause:

    1) I don't want to couple the serverObject with the class that is storing all of these objects. It doesn't know where to look for entities of a certain type. And unfortunately, the identity of a type of ServerObject varies with the type of ServerObject (so sometimes it's just an ID, other times it's a combination of an ID and a uniquely identifying string, etc.). And this behavior doesn't belong down there on those subclasses; it is the responsibility of the EntityManager and its delegates.

    2) In this case, I can't modify the ServerObject classes since they're plain old data objects. It should be mentioned that I've got other instances of the above method that take a parameter like "IEntity" and proceed to do almost the same thing (but slightly modify the name of the methods they're calling to get the identity of the entity). So, we might have:

      case Types.Bar:
          existingEntity = this.getBarByUserIdAndId(entity.getUserId(), entity.getId());
          break;

    So in that case, I can change the entity interface and subclasses, but this isn't behavior that belongs in that class. So, I think that points me to some sort of map. Eventually I will call:

      private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
          return aMapOfSomeSort[entityType].findByServerObject(serverObject);
      }

      private findEntityForEntity(someEntity: IEntity): IEntity {
          return aMapOfSomeSort[someEntity.entityType].findByEntity(someEntity);
      }

    Which means I need to register some sort of strategy classes/functions at runtime with this map. And again, I'd darn well better remember to register one for each of my types, or I'll get a runtime exception. Is there a better way to refactor this? I feel like I'm missing something really obvious here.
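
    The map-of-strategies version can be made to fail fast instead of failing at lookup time, which addresses the "I'd better remember to register one" worry: do all registrations in one constructor and throw a clear error on an unknown type. A sketch of the shape - in Java for consistency with the other examples here; it translates one-to-one to a TypeScript Record<string, (so: any) => IEntity>, and every name below is a stand-in for yours:

      import java.util.HashMap;
      import java.util.Map;
      import java.util.function.Function;

      interface IEntity {}

      class ServerObject {
          String userId, settingName, id;
      }

      class EntityManager {
          private final Map<String, Function<ServerObject, IEntity>> finders = new HashMap<>();

          EntityManager() {
              // One registration per type, all in one auditable place.
              finders.put("UserSetting", so -> getUserSettingByUserIdAndSettingName(so.userId, so.settingName));
              finders.put("Bar",         so -> getBarByUserIdAndId(so.userId, so.id));
          }

          IEntity findEntityForServerObject(String entityType, ServerObject so) {
              Function<ServerObject, IEntity> finder = finders.get(entityType);
              if (finder == null) {
                  // Fail fast with a clear message instead of a silent null fall-through.
                  throw new IllegalArgumentException("No finder registered for type: " + entityType);
              }
              return finder.apply(so);
          }

          // Stubs standing in for the existing lookup methods.
          private IEntity getUserSettingByUserIdAndSettingName(String userId, String name) { return null; }
          private IEntity getBarByUserIdAndId(String userId, String id) { return null; }
      }

    Because each entry closes over exactly the identity fields its type needs, the "identity varies by type" knowledge stays inside the EntityManager, and the ServerObject data classes never have to change.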


  • Is this a valid implementation of the repository pattern?

    - by user1578653
    I've been reading up about the repository pattern, with a view to implementing it in my own application. Almost all examples I've found on the internet use some kind of existing framework rather than showing how to implement it 'from scratch'. Here are my first thoughts on how I might implement it - I was wondering if anyone could advise me on whether this is correct?

    I have two tables, named CONTAINERS and BITS. Each CONTAINER can contain any number of BITs. I represent them as two classes:

      class Container {
          private $bits;
          private $id;
          //...and a property for each column in the table...

          public function __construct() {
              $this->bits = array();
          }

          public function addBit($bit) {
              $this->bits[] = $bit;
          }

          //...getters and setters...
      }

      class Bit {
          //some properties, methods etc...
      }

    Each class will have a property for each column in its respective table. I then have a couple of 'repositories' which handle things to do with saving/retrieving these objects from the database:

      //repository to control saving/retrieving Containers from the database
      class ContainerRepository {
          //inject the bit repository for use later
          public function __construct($bitRepo) {
              $this->bitRepo = $bitRepo;
          }

          public function getById($id) {
              //talk directly to Oracle here to load all column data into the object

              //get all the bits in the container
              $bits = $this->bitRepo->getByContainerId($id);
              foreach ($bits as $bit) {
                  $container->addBit($bit);
              }

              //return an instance of Container
          }

          public function persist($container) {
              //talk directly to Oracle here to save it to the database
              //if its ID is NULL, create a new container in the database, otherwise update the existing one

              //use BitRepository to save each of the Bits inside the Container
              $bitRepo = $this->bitRepo;
              foreach ($container->bits as $bit) {
                  $bitRepo->persist($bit);
              }
          }
      }

      //repository to control saving/retrieving Bits from the database
      class BitRepository {
          public function getById($id) {}
          public function getByContainerId($containerId) {}
          public function persist($bit) {}
      }

    Therefore, the code I would use to get an instance of Container from the database would be:

      $bitRepo = new BitRepository();
      $containerRepo = new ContainerRepository($bitRepo);
      $container = $containerRepo->getById($id);

    Or to create a new one and save to the database:

      $bitRepo = new BitRepository();
      $containerRepo = new ContainerRepository($bitRepo);

      $container = new Container();
      $container->setSomeProperty(1);

      $bit = new Bit();
      $container->addBit($bit);

      $containerRepo->persist($container);

    Can someone advise me as to whether I have implemented this pattern correctly? Thanks!


  • IXRepository and test problems

    - by Ridermansb
    I recently had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this:

      public interface IRepository<T> where T : class, IEntity
      {
          IQueryable<T> Query(Expression<Func<T, bool>> expression);
          // ... Omitted
      }

    And a generic implementation of IRepository:

      public class Repository<T> : IRepository<T> where T : class, IEntity
      {
          public IQueryable<T> Query(Expression<Func<T, bool>> expression)
          {
              return All().Where(expression).AsQueryable();
          }
      }

    This is a base implementation that can be used by any repository; it contains the basic implementation of my ORM. Some repositories have specific filters - in that case we have IEmployeeRepository with a specific filter:

      public interface IEmployeeRepository : IRepository<Employee>
      {
          IQueryable<Employee> GetInactiveEmployees();
      }

    And the implementation of IEmployeeRepository:

      // TODO: I have a dependency on the ORM at this point, in Repository<Employee>.
      // How to solve that? How to test the GetInactiveEmployees method?
      public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
      {
          public IQueryable<Employee> GetInactiveEmployees()
          {
              return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
          }
      }

    Questions:

    1. Is it right to inherit from Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been written. If EmployeeRepository implemented only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>.
    2. In our example, in EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer. We have a dependency here on our ORM that makes it impossible to perform a unit test. How do I create a unit test to ensure that the filter GetInactiveEmployees returns all Employees where Status != Active and StartDate < DateTime.Now? I cannot create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees.

    The complete code can be found on GitHub.
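
    One standard way out is to make the data source the only overridable seam, so the filter logic can be exercised against an in-memory list with no ORM in sight. A sketch of the idea - in Java for consistency with the other examples here; in the C# original the equivalent move is making All() virtual (or injecting an IQueryable source) and overriding it in a test double. The filter below mirrors the one in the question:

      import java.time.LocalDate;
      import java.util.List;
      import java.util.function.Predicate;
      import java.util.stream.Stream;

      enum Status { ACTIVE, INACTIVE }
      record Employee(String name, Status status, LocalDate startDate) {}

      abstract class Repository<T> {
          protected abstract Stream<T> all();              // the single ORM touch-point

          Stream<T> query(Predicate<T> p) {
              return all().filter(p);
          }
      }

      abstract class EmployeeRepository extends Repository<Employee> {
          Stream<Employee> getInactiveEmployees() {
              return query(e -> e.status() != Status.ACTIVE
                             || e.startDate().isBefore(LocalDate.now()));
          }
      }

      // ORM-free test double: same filter code, in-memory data.
      class InMemoryEmployeeRepository extends EmployeeRepository {
          private final List<Employee> data;
          InMemoryEmployeeRepository(List<Employee> data) { this.data = data; }
          @Override protected Stream<Employee> all() { return data.stream(); }
      }

      class FilterTest {
          public static void main(String[] args) {
              var repo = new InMemoryEmployeeRepository(List.of(
                      new Employee("active",   Status.ACTIVE,   LocalDate.now().plusDays(1)),
                      new Employee("inactive", Status.INACTIVE, LocalDate.now().plusDays(1))));
              assert repo.getInactiveEmployees().count() == 1;  // only the inactive one matches
          }
      }

    The production EmployeeRepository stays a subclass whose all() delegates to the ORM, so question 1's inheritance-for-reuse survives, and question 2's test needs no mock framework at all.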


  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes this is a general question, and I'd be perfectly happy to move this to another portion of SE, but I didn't see a more fitting sub-site. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem sparse in terms of explaining the languages involved, basic practices, etc. So I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, in case it's just absolutely unacceptable to speak in general terms.

    I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen based - the user can use and interact with the devices. So, in terms of my QA role, in the short term I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much. Most of our hardware runs .NET/C# code, with some of the servers running Java - but I don't expect to need to run tests on the Java side at this point.

    I hope to meet with one developer today and try to get a good idea of the output from the hardware, so that I can 'mock' the data that gets sent to the servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test both virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which kiosks report their data to, gets updated from the kiosks, but the remote server does not; or, vice versa, when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful.

    I've bought a book, "How Google Tests Software", but it's really more a book about 'how their software testing is different from Microsoft'. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble had a rather terrible selection, and I also figure a book from 2005 is not necessarily that good either.


  • Windows Backup failed with error 0x807800C5

    - by Alexey Ivanov
    I set up backup of my Windows 8 laptop with Windows 7 File Recovery (known as Backup and Restore in Windows 7). Backup of files runs successfully, but if I try to create a system image, it fails with error 0x807800C5.

    Error details on the dialog: The mounted backup volume is inaccessible.

    Error details in the system log: There was a failure in preparing the backup image of one of the volumes in the backup set.

    I save the backup to a network location, a WD MyBookLive.

    Edit: I tried some of the steps suggested in the various threads about this issue:

    - Cleaned up the backup location: removed MediaID.bin in the backup location and removed the folder <ComputerName> from WindowsImageBackup. Restarting the backup resulted in the same error; however, the error dialog showed a slightly different message: The specified backup disk cannot be found.
    - Performed a System File Check by running sfc /scannow. It showed no errors, but running the backup failed nevertheless.

    I tried searching Google for the error code, but I've found no solution so far.

    Update: I submitted a technical support request to Microsoft. Their first suggestion was a clean boot, but it didn't help. I pointed out that I had tried all the methods from the same problem on MS Answers, and nothing had helped.


  • MobaXterm - SSH key authentication

    - by Chip Sprague
    I have a key that I converted and works fine with Putty. I have tried these formats:

      ssh -p 1111 -i id_rsa [email protected]
      ssh -i id_rsa -p 1111 [email protected]

    The key is in the same folder as the MobaXterm executable. Thanks!

    EDIT:

      [chip.client] $ ssh -p 1111 -i id_rsa [email protected] -v
      Warning: Identity file id_rsa not accessible: No such file or directory.
      OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to 192.168.0.9 [192.168.0.100] port 1111.
      debug1: Connection established.
      debug1: identity file /home/chip/.ssh/id_rsa type -1
      debug1: identity file /home/chip/.ssh/id_rsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7
      debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.6
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 [email protected]
      debug1: kex: client->server aes128-ctr hmac-md5 [email protected]
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: checking without port identifier
      Warning: Permanently added '[192.168.0.100]:1111' (RSA) to the list of known hosts.
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/chip/.ssh/id_rsa
      debug1: No more authentication methods to try.
      Permission denied (publickey).

      [01/09/2011 - 09:15.38] ~


  • ssh key error - Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

    - by user1963938
    Amazon EC2 :: Red Hat 6, 64-bit. I'm trying to follow the SOCKS5 guidelines (http://www.catonmat.net/blog/linux-socks5-proxy/) to open a SOCKS proxy on one of our servers, but unfortunately I got stuck at step 1:

      ssh -N -D 0.0.0.0:1080 localhost

    I get the error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). How do I fix it? More debug info:

      ssh -v -f -N -D 0.0.0.0:1080 localhost
      OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug1: Connecting to localhost [127.0.0.1] port 22.
      debug1: Connection established.
      debug1: permanently_set_uid: 0/0
      debug1: identity file /root/.ssh/identity type -1
      debug1: identity file /root/.ssh/id_rsa type -1
      debug1: identity file /root/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
      debug1: match: OpenSSH_5.3 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.3
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'localhost' is known and matches the RSA host key.
      debug1: Found key in /root/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
      debug1: Next authentication method: gssapi-keyex
      debug1: No valid Key exchange context
      debug1: Next authentication method: gssapi-with-mic
      debug1: Unspecified GSS failure. Minor code may provide more information
      Credentials cache file '/tmp/krb5cc_0' not found
      debug1: Unspecified GSS failure. Minor code may provide more information
      Credentials cache file '/tmp/krb5cc_0' not found
      debug1: Unspecified GSS failure. Minor code may provide more information
      debug1: Unspecified GSS failure. Minor code may provide more information
      debug1: Next authentication method: publickey
      debug1: Trying private key: /root/.ssh/identity
      debug1: Trying private key: /root/.ssh/id_rsa
      debug1: Trying private key: /root/.ssh/id_dsa
      debug1: No more authentication methods to try.
      Permission denied (publickey,gssapi-keyex,gssapi-with-mic).


  • MS NPS denying access, can't validate server certificate

    - by Fred Weston
    At my office we use a Cisco WLC2504 wireless controller, and starting about a week ago we began having problems with users connecting to one of our secure wireless networks. We are running AD on Windows Server 2008 R2 and use Network Policy Server to control access to the wireless network. When I look at the logs in Event Viewer after a failed connection attempt, I see an access-reject message:

      Reason Code: 262
      Reason: The supplied message is incomplete. The signature was not verified.

    Looking this up on Google I found this article: http://support.microsoft.com/kb/838502. I tried disabling server certificate validation on my computer, and as soon as I did that I was able to connect to the network, so it seems there is some sort of certificate validation issue. I'm not sure which certificate cannot be validated or how to fix it. This used to work and stopped suddenly by itself, so I am thinking a certificate may have expired. When I go to NPS > Policies > Network Policies > (my policy) > Constraints > Authentication methods > Microsoft PEAP and view the properties, the certificate specified there expires in 2016, so it doesn't seem as though this could be the problem. Any suggestions on how to troubleshoot this issue?


  • Can't get Postfix Admin to use Dovecot password hashing

    - by Paul
    I'm setting up Postfix Admin 2.91 and trying to use dovecot:SHA512-CRYPT for password hashing. In config.inc.php I have set:

      // dovecot:CRYPT-METHOD = use dovecotpw -s 'CRYPT-METHOD'. Example: dovecot:CRAM-MD5
      // (WARNING: don't use dovecot:* methods that include the username in the hash - you won't be able to login to PostfixAdmin in this case)
      $CONF['encrypt'] = 'dovecot:SHA512-CRYPT';

      // If you use the dovecot encryption method: where is the dovecotpw binary located?
      // for dovecot 1.x
      // $CONF['dovecotpw'] = "/usr/sbin/dovecotpw";
      // for dovecot 2.x (dovecot 2.0.0 - 2.0.7 is not supported!)
      $CONF['dovecotpw'] = "/usr/sbin/doveadm pw";

    I have also tried SHA256-CRYPT and MD5-CRYPT with the same results (as I understand it, these do not include usernames in the hash). When running setup.php, I get the following message when trying to create an admin account:

      can't encrypt password with dovecotpw, see error log for details

    The server error log reports:

      1624#0: *6 FastCGI sent in stderr: "PHP message: dovecotpw password encryption failed.
      PHP message: STDERR output: sh: 1: /usr/sbin/doveadm: not found" while reading response header from upstream <...> upstream: "fastcgi://unix:/var/run/php5-fpm.sock:" <...>

    A couple of quick checks:

      # ll /usr/sbin/doveadm
      -rwxr-xr-x 1 root root 423264 Feb 13 23:23 /usr/bin/doveadm*

      # doveadm pw -l
      CRYPT MD5 MD5-CRYPT SHA SHA1 SHA256 SHA512 SMD5 SSHA SSHA256 SSHA512 PLAIN CLEAR CLEARTEXT PLAIN-TRUNC CRAM-MD5 SCRAM-SHA-1 HMAC-MD5 DIGEST-MD5 PLAIN-MD4 PLAIN-MD5 LDAP-MD5 LANMAN NTLM OTP SKEY RPA SHA256-CRYPT SHA512-CRYPT

      # doveadm pw -s SHA512-CRYPT
      Enter new password:
      Retype new password:
      {SHA512-CRYPT}$6$<long string here>/

    Using Dovecot 2.2, PHP 5.5, MariaDB 10, Postfix 2.11, nginx 1.6.0, Ubuntu 12.04.


  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1 GB of RAM. Via the Tomcat configuration tool in Windows, I was able to set the min/max/permgen sizes of the JVM. Those were set to 256/512/128 respectively. Now on the Ubuntu box, I've tried setting the JVM options in several different places, including:

    - Adding JAVA_OPTS & CATALINA_OPTS in /etc/environment
    - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
    - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
    - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
    - Un-commenting and modifying JAVA_OPTS in /etc/default/tomcat6

    Nearly all of these methods did not work; the exception was modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I just did that). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 runs as a service and appears to behave this way. However, when I look at htop on the Linux box, there are 20+ tomcat6 entries (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings. The app runs fine for a bit, but eventually ends up locking up. I'm guessing each of these apps thinks it has 512m to work with, never GCs, and then locks Tomcat up completely. What is the proper way to set all of this up?


  • Redirect particular hostname from https to http in httpd/apache2

    - by webnothing
    I have a webserver that has an SSL certificate applied to a subdomain, https://shop.mydomain.com. I also have the hostname http://mydomain.com, which has no SSL certificate. When invoking https://mydomain.com, browsers issue a warning that the certificate could not be verified, because the webserver is identifying itself as https://shop.mydomain.com. I would like all traffic that hits https://mydomain.com to be redirected to http://mydomain.com, and https://shop.mydomain.com left as is. My httpd.conf file generally looks like this:

      < VirtualHost 122.11.11.21:80 >
         ServerName shop.mydomain.com
         .. regular old port 80 ..
      < /VirtualHost >

      < VirtualHost 122.11.11.21:443 >
         ServerName shop.mydomain.com
         .. SSL applies here ..
      < /VirtualHost >

      < VirtualHost 122.11.11.21:80 >
         ServerName mydomain.com
         .. regular old port 80 ..
      < /VirtualHost >

    It does not look as if I have SSL set up for https://mydomain.com, yet one can invoke SSL mode and the browser identifies the connection as https://shop.mydomain.com. I need to redirect away from https://mydomain.com because, for some reason, Google has indexed my website with this URL even though it shows a warning. I have tried various methods to get this to redirect and nothing has worked. Any help would be greatly appreciated.
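
    One approach worth trying (a sketch only, with the SSL directives abbreviated the same way as above): the shop certificate shows up on https://mydomain.com because, with a single IP, Apache falls back to the only :443 vhost it has for that address. Adding a second :443 vhost named mydomain.com that does nothing but redirect gives those requests somewhere to land. Note that the TLS handshake still completes with a certificate that doesn't match mydomain.com, so the browser warning itself can't be removed without a certificate for that name; visitors who click through are then bounced to the plain-HTTP site:

      < VirtualHost 122.11.11.21:443 >
         ServerName mydomain.com
         .. same SSL certificate/key directives as shop.mydomain.com ..
         Redirect permanent / http://mydomain.com/
      < /VirtualHost >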


  • saslauthd + Postfix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to setup PostFix while using SASL (Cyrus variety preferred, I was using dovecot earlier but I'm switching from dovecot to courier so I want to use cyrus instead of dovecot) but I seem to be having issues. Here are the errors I'm receiving: ==> mail.log <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.info <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.warn <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure I tried $testsaslauthd -u xxxx -p xxxx 0: OK "Success." So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here is my files. Here is my main.cf for postfix: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname # This is already done in /etc/mailname #myhostname = crazyinsanoman.xxxxx.com smtpd_banner = $myhostname ESMTP $mail_name #biff = no # appending .domain is the MUA's job. #append_dot_mydomain = no readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # Relay smtp through another server or leave blank to do it yourself #relayhost = smtp.yourisp.com # Network details; Accept connections from anywhere, and only trust this machine mynetworks = 127.0.0.0/8 inet_interfaces = all #mynetworks_style = host #As we will be using virtual domains, these need to be empty local_recipient_maps = mydestination = # how long if undelivered before sending "delayed mail" warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes # not sure of the difference of the next two # but they are needed for local aliasing alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/vmail # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 inet_protocols=all # Cyrus SASL Support smtpd_sasl_path = smtpd smtpd_sasl_local_domain = xxxxx.com ####################### ## OLD CONFIGURATION ## ####################### #myorigin = /etc/mailname #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain #mailbox_size_limit = 0 #recipient_delimiter = + #html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 #virtual_alias_domains = ##virtual_alias_maps = hash:/etc/postfix/virtual #virtual_mailbox_base = /home/vmail ##luser_relay = webmaster #smtpd_sasl_type = dovecot #smtpd_sasl_path = private/auth smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes #smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination #virtual_create_maildirsize = yes #virtual_maildir_extended = yes #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps #virtual_transport = dovecot #dovecot_destination_recipient_limit = 1 Here is my master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. 
    #
    # ==========================================================================
    # service type  private unpriv  chroot  wakeup  maxproc command + args
    #               (yes)   (yes)   (yes)   (never) (100)
    # ==========================================================================
    smtp       inet  n       -       -       -       -       smtpd
    submission inet  n       -       -       -       -       smtpd
      -o smtpd_tls_security_level=encrypt
      -o smtpd_sasl_auth_enable=yes
      -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #smtps     inet  n       -       -       -       -       smtpd
    #  -o smtpd_tls_wrappermode=yes
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #628       inet  n       -       -       -       -       qmqpd
    pickup     fifo  n       -       -       60      1       pickup
    cleanup    unix  n       -       -       -       0       cleanup
    qmgr       fifo  n       -       n       300     1       qmgr
    #qmgr      fifo  n       -       -       300     1       oqmgr
    tlsmgr     unix  -       -       -       1000?   1       tlsmgr
    rewrite    unix  -       -       -       -       -       trivial-rewrite
    bounce     unix  -       -       -       -       0       bounce
    defer      unix  -       -       -       -       0       bounce
    trace      unix  -       -       -       -       0       bounce
    verify     unix  -       -       -       -       1       verify
    flush      unix  n       -       -       1000?   0       flush
    proxymap   unix  -       -       n       -       -       proxymap
    proxywrite unix  -       -       n       -       1       proxymap
    smtp       unix  -       -       -       -       -       smtp
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay      unix  -       -       -       -       -       smtp
      -o smtp_fallback_relay=
    #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq      unix  n       -       -       -       -       showq
    error      unix  -       -       -       -       -       error
    retry      unix  -       -       -       -       -       error
    discard    unix  -       -       -       -       -       discard
    local      unix  -       n       n       -       -       local
    virtual    unix  -       n       n       -       -       virtual
    lmtp       unix  -       -       -       -       -       lmtp
    anvil      unix  -       -       -       -       1       anvil
    scache     unix  -       -       -       -       1       scache
    #
    # ====================================================================
    # Interfaces to non-Postfix software. Be sure to examine the manual
    # pages of the non-Postfix software to find out what options it wants.
    #
    # Many of the following services use the Postfix pipe(8) delivery
    # agent. See the pipe(8) man page for information about ${recipient}
    # and other message envelope options.
    # ====================================================================
    #
    # maildrop. See the Postfix MAILDROP_README file for details.
    # Also specify in main.cf: maildrop_destination_recipient_limit=1
    #
    maildrop   unix  -       n       n       -       -       pipe
      flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
    #
    # ====================================================================
    #
    # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
    #
    # Specify in cyrus.conf:
    #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
    #
    # Specify in main.cf one or more of the following:
    #   mailbox_transport = lmtp:inet:localhost
    #   virtual_transport = lmtp:inet:localhost
    #
    # ====================================================================
    #
    # Cyrus 2.1.5 (Amos Gouaux)
    # Also specify in main.cf: cyrus_destination_recipient_limit=1
    #
    cyrus      unix  -       n       n       -       -       pipe
      user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
    #
    # ====================================================================
    # Old example of delivery via Cyrus.
    #
    #old-cyrus unix  -       n       n       -       -       pipe
    #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
    #
    # ====================================================================
    #
    # See the Postfix UUCP_README file for configuration details.
    #
    uucp       unix  -       n       n       -       -       pipe
      flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
    #
    # Other external delivery methods.
    #
    ifmail     unix  -       n       n       -       -       pipe
      flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
    bsmtp      unix  -       n       n       -       -       pipe
      flags=Fq.
      user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
    scalemail-backend unix - n     n       -       2       pipe
      flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
    mailman    unix  -       n       n       -       -       pipe
      flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
    #dovecot   unix  -       n       n       -       -       pipe
    #  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Here is what I'm using for /etc/postfix/sasl/smtpd.conf:

    log_level: 7
    pwcheck_method: saslauthd
    pwcheck_method: auxprop
    mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5
    allow_plaintext: true
    auxprop_plugin: mysql
    sql_hostnames: 127.0.0.1
    sql_user: xxxxx
    sql_passwd: xxxxx
    sql_database: maildb
    sql_select: select crypt from users where id = '%u'

    As you can see, I'm trying to use MySQL as my authentication backend. The password in 'users' is set through the ENCRYPT() function. I also followed the steps in http://www.jimmy.co.at/weblog/?p=52 to recreate /var/spool/postfix/var/run/saslauthd, since that seems to be a lot of people's problem, but that didn't help at all.

    Also, here is my /etc/default/saslauthd:

    START=yes
    DESC="SASL Authentication Daemon"
    NAME="saslauthd"

    # Which authentication mechanisms should saslauthd use? (default: pam)
    #
    # Available options in this Debian package:
    # getpwent  -- use the getpwent() library function
    # kerberos5 -- use Kerberos 5
    # pam       -- use PAM
    # rimap     -- use a remote IMAP server
    # shadow    -- use the local shadow password file
    # sasldb    -- use the local sasldb database file
    # ldap      -- use LDAP (configuration is in /etc/saslauthd.conf)
    #
    # Only one option may be used at a time. See the saslauthd man page
    # for more information.
    #
    # Example: MECHANISMS="pam"
    MECHANISMS="pam"

    MECH_OPTIONS=""
    THREADS=5
    OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    I had heard that changing MECHANISMS to MECHANISMS="mysql" might help, but as the options listed above show, mysql is not one of the supported mechanisms; I tried it anyway in case the documentation was outdated, and it made no difference. So I'm now at a loss... I have no idea where to go from here or what steps I need to take to get this working =/ Anyone have any ideas?

    EDIT: Here is the error coming from auth.log.
    I don't know if this will help at all, but here you go:

    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]';
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected]
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host
    Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
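    While waiting for ideas, two things I plan to try next; I'd appreciate confirmation that they're on the right track. First, testsaslauthd succeeded against the default socket, but since smtpd runs chrooted, I probably need to test against the socket Postfix actually sees (the -f flag points testsaslauthd at a specific mux; the path below matches my OPTIONS line above):

    testsaslauthd -f /var/spool/postfix/var/run/saslauthd/mux -u xxxx -p xxxx

    Second, my smtpd.conf sets pwcheck_method twice (saslauthd and auxprop), and as far as I understand only one of those can actually take effect. If the sql auxprop plugin is the one answering (which the auth.log above suggests), I'd expect PLAIN to fail against crypt()-hashed passwords, because auxprop compares the client's password against the stored secret directly. So a saslauthd-only file is what I'm considering; just a sketch, untested:

    # /etc/postfix/sasl/smtpd.conf: saslauthd-only sketch
    log_level: 7
    pwcheck_method: saslauthd
    # saslauthd can only verify plaintext mechanisms, so drop CRAM-MD5/DIGEST-MD5
    mech_list: PLAIN LOGIN

    To exercise authentication end to end, I'd build an AUTH PLAIN token and paste it into an SMTP session (user and password here are placeholders):

    printf '\0%s\0%s' 'user@xxxxx.com' 'password' | base64
    openssl s_client -starttls smtp -connect localhost:25
    # then inside the session: EHLO test, followed by AUTH PLAIN <token from printf>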

    Read the article

  • Nginx upload PUT and POST

    - by w00t
    I am trying to make nginx accept POST and PUT methods to upload files. I have compiled nginx_upload_module-2.2.0, but I can't find any how-to. I simply want to use only nginx for this: no reverse proxy, no other backend, and no PHP. Is this achievable? Here is my configuration:

    nginx version: nginx/1.2.3
    TLS SNI support enabled
    configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' --add-module=/usr/src/nginx-1.2.3/nginx_upload_module-2.2.0

    server {
        listen 80;
        server_name example.com;

        location / {
            root /html;
            autoindex on;
        }

        location /upload {
            root /html;
            autoindex on;
            upload_store /html/upload 1;
            upload_set_form_field $upload_field_name.name "$upload_file_name";
            upload_set_form_field $upload_field_name.content_type "$upload_content_type";
            upload_set_form_field $upload_field_name.path "$upload_tmp_path";
            upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
            upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
            upload_pass_form_field "^submit$|^description$";
            upload_cleanup 400 404 499 500-505;
        }
    }

    And as an upload form, I'm trying to use the one listed at the end of this page: http://grid.net.ru/nginx/upload.en.html
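    For what it's worth, here is how I have been testing; maybe the test itself is the problem. A multipart POST with curl (host and file path are placeholders), using the same field names my config passes through:

    curl -v -F 'description=test' -F 'file=@/tmp/test.txt' http://example.com/upload

    and a minimal form modeled on the one from the page linked above:

    <form method="POST" enctype="multipart/form-data" action="/upload">
      <input type="text" name="description">
      <input type="file" name="file">
      <input type="submit" name="submit" value="Upload">
    </form>

    As for PUT, my understanding is that the upload module only parses multipart POST bodies, so a raw PUT such as "curl -T /tmp/test.txt http://example.com/upload/test.txt" would have to be handled by the DAV module (dav_methods PUT; in the location block) rather than by the upload module, though I am not certain that is right.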

    Read the article

  • scp error: "Permission denied (publickey). lost connection"

    - by Winston C. Yang
    I tried to scp an svn dump to Savannah, but I got the following error at the end:

    Permission denied (publickey).
    lost connection

    The scp command and verbose output are below. Any ideas?

    [wcyang@be2-wireless-pittnet-60-37 ~]$ scp -v diffcolor-dump.bz2 [email protected]:/srv/download/diffcolor/
    Executing: program /usr/bin/ssh host dl.sv.gnu.org, user wcyang, command scp -v -t /srv/download/diffcolor/
    OpenSSH_5.2p1, OpenSSL 0.9.7l 28 Sep 2006
    debug1: Reading configuration data /etc/ssh_config
    debug1: Connecting to dl.sv.gnu.org [140.186.70.73] port 22.
    debug1: Connection established.
    debug1: identity file /Users/wcyang/.ssh/identity type -1
    debug1: identity file /Users/wcyang/.ssh/id_rsa type 1
    debug1: identity file /Users/wcyang/.ssh/id_dsa type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
    debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.2
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host 'dl.sv.gnu.org' is known and matches the RSA host key.
    debug1: Found key in /Users/wcyang/.ssh/known_hosts:1
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Trying private key: /Users/wcyang/.ssh/identity
    debug1: Offering public key: /Users/wcyang/.ssh/id_rsa
    debug1: Authentications that can continue: publickey
    debug1: Trying private key: /Users/wcyang/.ssh/id_dsa
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    lost connection
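    While I wait, here is what I plan to double-check on my end; this is just a sketch of generic publickey debugging, and I am not sure how much of it applies to Savannah's setup:

    # fingerprint of the key being offered, to compare with the one registered server-side
    ssh-keygen -l -f ~/.ssh/id_rsa.pub

    # force ssh to use exactly that identity and watch whether the server accepts it
    ssh -v -i ~/.ssh/id_rsa wcyang@dl.sv.gnu.org

    # local permissions ssh is strict about
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/id_rsa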

    Read the article
