Search Results

Search found 29753 results on 1191 pages for 'best practices'.


  • Is there such a thing as a hardware-encrypted RAID disk?

    - by Dumitrescu Bogdan
    I have a server whose content I want to protect. The server is located on a client's premises. Is there a way to encrypt the contents of a RAID disk at the hardware level? What I need is for the server to be unable to start until the required password (the encryption key) is provided. I will give the best answer to Miles, though it was not exactly an answer to my question. From all the comments, it seems that it cannot be done in hardware, or at least not the way I would like.

  • Transfer page from internal to external

    - by Theo Gulland
    Afternoon all! I currently run a website listing audio products (essentially a search engine for audio deals): http://www.soundplaza.co.uk. From a details page you can press the 'view deal' button to go to the provider's site, e.g. http://www.soundplaza.co.uk/all-deals/113/bookshelf-speakers/acoustic-energy-1. This jump between two sites is a bit harsh, and I would like to show a transition page to ease visitors into the other site and not scare them off. Within this transition page I will have a simple loading gif and some graphics showing that you're being transferred. QUESTION: What is the best way to send the details (link, product name, etc.) to this transfer page, wait 5 seconds, and then move on to the desired link, in a way that can in NO WAY damage my SEO? If anything, rel="nofollow" behaviour would be great if possible. I have seen that you can submit a form to the transition page, then use PHP's sleep() and then header() to redirect; however, I am not sure whether header() passes SEO value on to the provider. One sketch of this is below. Any opinions would be great! Thanks
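
    A minimal sketch of one way to wire this up (not from the original question; transfer.php, the deal parameter, and the lookupDeal() helper are all hypothetical). Looking the deal up by id avoids creating an open redirect, a meta refresh handles the 5-second wait client-side, and the noindex/nofollow hints keep the hop page out of search engines:

        <?php
        // transfer.php -- hypothetical transition page (all names are placeholders).
        // Look the deal up by id rather than accepting a raw URL, so the page
        // cannot be abused as an open redirect.
        $dealId = isset($_GET['deal']) ? (int) $_GET['deal'] : 0;
        $deal = lookupDeal($dealId); // hypothetical helper: array('name' => ..., 'url' => ...) or null
        if ($deal === null) { header('HTTP/1.0 404 Not Found'); exit; }
        header('X-Robots-Tag: noindex, nofollow'); // keep the hop page out of the index
        ?>
        <!DOCTYPE html>
        <html>
        <head>
        <!-- client-side redirect after 5 seconds; no PHP sleep() needed -->
        <meta http-equiv="refresh" content="5;url=<?php echo htmlspecialchars($deal['url']); ?>">
        </head>
        <body>
        <img src="loading.gif" alt="Transferring...">
        <p><a rel="nofollow" href="<?php echo htmlspecialchars($deal['url']); ?>">Click here</a>
        if you are not redirected automatically.</p>
        </body>
        </html>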

  • GWB | Comment Spam On The Rise

    - by Geekswithblogs Administrator
    I don’t know a member on Geekswithblogs.net who is not frustrated with the amount of spam they get. It is a major problem that we have been dealing with for 6+ years, and we keep trying to come up with new ways to fight it. As spammers get smarter, we have to continue to upgrade the tools we use to combat them. Just like any spam filter, ours sometimes catches good comments. This has been a huge concern for some bloggers, forcing us to fine-tune what we classify as spam and not spam. So this post is here just to state that we know the spam problem comes in waves: sometimes it is not so bad, other times it gets worse. Right now it is worse. One measure we will take soon, if this continues, is requiring CAPTCHA, since most members don’t clean up their spam via the admin tools (which are not the best tools, I know). I also want to solicit a better approach from the members: what would you like the spam interface on GWB to be like? Be realistic, because we all want “Zero Spam, Good Comment live”. Related Tags: Geekswithblogs.net, Spam

  • Component-wise GLSL vector branching

    - by Gustavo Maciel
    I'm aware that it is usually a BAD idea to operate on a GLSL vec's components separately. For example:

        // Use the intrinsic functions; they do the calculation on 4 components at a time.
        float d = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z;  // NEVER
        float d = dot(v1, v2);                              // YES

        // Multiplying one component at a time is not good either, since the ALU
        // can do all 4 components at once.
        vec3 mul = vec3(v1.x * v2.x, v1.y * v2.y, v1.z * v2.z);  // NEVER
        vec3 mul = v1 * v2;                                      // YES

    I've been wondering: are there equivalent operations for branching? For example:

        vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity) {
            bvec4 less = lessThan(v1, vec4(0.5));
            vec4 blend;
            for (int i = 0; i < 4; ++i) {
                if (less[i])
                    blend[i] = 2.0 * v1[i] * v2[i];
                else
                    blend[i] = 1.0 - 2.0 * (1.0 - v1[i]) * (1.0 - v2[i]);
            }
            return v1 + (blend - v1) * opacity;
        }

    This is an Overlay operator that works component-wise. I'm not sure this is the best way to do it, since I'm afraid the for and the if can become a bottleneck later. Tl;dr: can I branch component-wise? If yes, how can I optimize that Overlay function with it?
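
    A hedged sketch of an answer (not from the original question): GLSL can do the select itself component-wise, with no loop or branch, using mix() and the bvec-to-vec constructor. Assuming a GLSL version that supports vec4(bvec4):

        vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity) {
            // 1.0 where v1 < 0.5, 0.0 elsewhere (bvec4 -> vec4 constructor)
            vec4 mask = vec4(lessThan(v1, vec4(0.5)));
            vec4 low  = 2.0 * v1 * v2;
            vec4 high = 1.0 - 2.0 * (1.0 - v1) * (1.0 - v2);
            // mix(a, b, t) = a*(1-t) + b*t, so mask == 1.0 selects 'low'
            vec4 blend = mix(high, low, mask);
            return v1 + (blend - v1) * opacity;
        }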

  • Windows Shares / NTFS permissions on folder redirection in Active Directory

    - by Shawn Gradwell
    A client has folder redirection set up in AD: each user's Home Folder is mapped to the Z: drive as \\server\share\username. A Group Policy redirects the user's Documents to the user's Home Folder, with the option 'Grant the user exclusive rights to Documents' selected. The share on the server grants the relevant user security group 'Full Control', but each user's folder has NTFS permissions only for 'CREATOR OWNER' and 'Domain Admins'. Why can users access other users' folders? I thought the more restrictive of the share and NTFS permissions applied. Also, this setup has been like this for years, and the client recently updated all client computers to Windows 7. What is the best way to set up this redirection now? I assume only in Group Policy, with Basic redirection and a folder created for each user under the root path?
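
    As a quick diagnostic (a sketch using the example path from the question), icacls on the server prints both explicit and inherited NTFS entries for a given redirected folder, which should reveal where the unexpected access is coming from:

        rem Show the NTFS ACL actually applied to one user's redirected folder
        icacls \\server\share\username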

  • Modify .htm files on wireless router

    - by mdeitrick
    What am I trying to do? I am attempting to access the .htm files on my wireless router to modify the look and feel of the Netgear GENIE web pages. What have I done? I've read several articles on eHow and Instructables that detail how to set up your router as an FTP server, since I figured the best way to access the files would be through FileZilla. Setting up an FTP server on my router doesn't sound like what I need to accomplish this task... or perhaps it is? I've also read the documents provided by Netgear for getting started and setting up functionality on the router. Maybe I overlooked something? My specifications: Netgear router WNR2000v3, FileZilla v3.8.1. UPDATE: Since someone voted to close this question, I'll clarify what I'm asking: at present I am dissatisfied with the UI/web page look when logging into routerlogin.net, and furthermore I would like to make changes to the admin dashboard.

  • How to use batch rendering with an entity component system?

    - by Kiril
    I have an entity component system and a 2D rendering engine. Because I have a lot of repeating sprites (the entities are non-animated and the background is tile-based), I would really like to use batch rendering to reduce calls to the drawing routine. What would be the best way to integrate this with an entity system? I thought about creating and populating the sprite batch on every frame update, but that will probably be very slow. A better way would be to add a reference to an entity's quad to the sprite batch at initialization, but that would mean either that the entity factory has to be aware of the rendering system, or that the sprite batch has to be a component of some cache entity. One case violates encapsulation pretty heavily, while the other forces a non-game-object entity into the entity system, which I am not sure I like a lot. As for the engine, I am using Love2D (Love2D website) and FEZ (FEZ website) as the entity system, so everything is in Lua. I am more interested in a generic pattern for implementing this properly than in a language/library-specific solution. Thanks in advance!
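
    One possible shape for this in Love2D (a sketch, not from the original question; all names are hypothetical, and it assumes a LÖVE version where SpriteBatch:add/:set accept a Quad): let the render system own the batch and hand each entity an opaque id, so neither the factory nor the entities know about rendering internals:

        -- Hypothetical render system owning the sprite batch.
        local RenderSystem = {}
        RenderSystem.__index = RenderSystem

        function RenderSystem.new(image)
            local self = setmetatable({}, RenderSystem)
            self.batch = love.graphics.newSpriteBatch(image, 1000)
            return self
        end

        -- Called once when a sprite component is attached to an entity.
        function RenderSystem:register(entity)
            entity.spriteId = self.batch:add(entity.quad, entity.x, entity.y)
        end

        -- Called each frame; only entities flagged as moved are rewritten in place.
        function RenderSystem:update(entities)
            for _, e in ipairs(entities) do
                if e.dirty then
                    self.batch:set(e.spriteId, e.quad, e.x, e.y)
                    e.dirty = false
                end
            end
        end

        function RenderSystem:draw()
            love.graphics.draw(self.batch)
        end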

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working with a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites. However, from what I can determine, the draw order is really the only thing that determines which textures (sprites) appear above or below their neighbors. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate on the vertex array being drawn. So: I use a texture atlas to avoid binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once, which kind of defeats the purpose. Does anyone have any clues as to the best approach here? Currently I'm working under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
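
    For what it's worth, the fixed-depth idea can be pushed one step further by sorting rather than hard-coding: give each sprite a layer and sort by (layer, atlas), so draw order stays correct across layers while the texture bind only changes when the atlas does. A sketch of the idea (not from the original question, shown here in C++):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Sprite {
            int layer;       // fixed depth: background = 0, foreground = 1, ...
            uint32_t atlas;  // GL texture id of the atlas this sprite lives in
            // ... vertex/uv data ...
        };

        void drawSprites(std::vector<Sprite>& sprites) {
            std::sort(sprites.begin(), sprites.end(),
                      [](const Sprite& a, const Sprite& b) {
                          return a.layer != b.layer ? a.layer < b.layer
                                                    : a.atlas < b.atlas;
                      });
            uint32_t bound = 0;
            for (const Sprite& s : sprites) {
                if (s.atlas != bound) {
                    // glBindTexture(GL_TEXTURE_2D, s.atlas); // rebind only on change
                    bound = s.atlas;
                }
                // ... append s to the current batch / flush on bind change ...
            }
        }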

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It is published by Microsoft and, in my opinion, it is a fantastic read. It took me some time to read this entire book, but it was worth it, as it tries to answer two very interesting questions. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:

    • Which NoSQL database(s) best meet(s) the needs of the organization?
    • How does an organization integrate a NoSQL database into its solutions?

    As I kept reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • How can I lock my Mac when I walk away?

    - by schnapple
    This has got to be an easy, trivial question, but as a new Mac user: how can I lock my Mac when I walk away? On Windows this is dead simple: Win+L, or hit Ctrl-Alt-Del and select "Lock this Computer". The best thing I've found for the Mac is to rig the screensaver to require a password on wake, set a hot corner to fire off the screen saver, and do that as I leave. Which feels really "Windows 3.1" to me. Is there a Win+L-style method to quickly lock my Mac when I walk away?

  • Apache2 ProxyPass

    - by gatsby
    I'm trying to figure out why my Apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server (IP 10.184.1.2) as a gateway with ProxyPass. These are the directives I inserted in the 000-default config file:

        ProxyPass        / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    The host 192.168.102.31 is an internal IP in a subnet that is not reachable directly by clients, only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
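
    An editorial aside, not from the original question: ProxyPassReverse only rewrites Location (and similar) response headers on redirects; absolute URLs inside the backend's HTML pass through untouched, which produces exactly this symptom. A minimal sketch of the gateway vhost, assuming Apache 2.2 with mod_proxy and mod_proxy_http loaded:

        <VirtualHost *:80>
            ServerName apache_gateway_name
            ProxyRequests Off          # reverse proxy only; never an open forward proxy
            ProxyPreserveHost On       # backend builds links against the gateway's hostname
            ProxyPass        / http://192.168.102.31/
            ProxyPassReverse / http://192.168.102.31/
        </VirtualHost>

    If the backend still emits hard-coded absolute URLs in its pages, those have to be fixed on the backend or rewritten with something like mod_proxy_html.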

  • Is "as long as it works" the norm?

    - by q303
    Hi. My last shop did not have a process. "Agile" essentially meant they did not have a plan at all about how to develop or manage their projects. It meant "hey, here's a ton of work. Go do it in two weeks. We're fast-paced and agile." They released stuff that they knew had problems. They didn't care how things were written. There were no code reviews, despite there being several developers. They released software they knew to be buggy. At my previous job, people had the attitude that as long as it works, it's fine. When I asked for a rewrite of some code I had written while we were essentially exploring the spec, they denied it. I wanted to rewrite the code because it was repeated in multiple places, there was no encapsulation, and it took people a long time to make changes to it. So essentially, my impression is that programming boils down to the following: reading some book about the latest tool/technology; throwing code together based on this, avoiding writing any individual code because the company doesn't want to "maintain custom code"; showing it and moving on to the next thing, "as long as it works." I've always told myself that at the next job I'm going to get a better shop. It never happens. If this is it, then I feel stuck. The technologies always change; if the only professional development here is reading the latest MS Press technology book, then what have you built in 10 years but a superficial knowledge of various technologies? I'm concerned about the best way to maintain professional standards, and about how to develop meaningful knowledge and experience in this situation.

  • Managing Operational Risk of Financial Services Processes – part 1/2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, mere compliance with regulation will no longer lead to sustainable differentiation. Genuine competitive advantage will stem from being able to cope with the innovation demands of the present economic environment while meeting the compliance goals of regulatory mandates in a faster and more cost-efficient manner. Let's first take a look at the key factors that are limiting the pursuit of this goal. Regulatory requirements are growing, driven in part by revisions to existing mandates in line with the cross-border, pan-geographic nature of financial value chains today, and more so by the frequent systemic failures that have destabilized the financial markets and the global economy over the last decade. In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two things: firstly, the blurring of boundaries between lines of business within complex organizational structures, and secondly, the varying requirements of jurisdictional directives across geographic boundaries; e.g. a securities firm with operations in the US and EU would be subject to different "Know Your Customer" (KYC) requirements under the PATRIOT Act in the US and MiFID in the EU. Another consequence and concomitant of regulatory change is regulatory conflict, which again arises primarily from two things: firstly, diametrically opposite priorities across lines of business, and secondly, the tension that regulatory requirements create between shareholders' interest in tighter due diligence and customers' concerns about privacy. For instance, Customer Due Diligence (CDD) under KYC requires eliciting detailed information from customers to prevent illegal activities such as money laundering, terrorist financing or identity theft. While new customers are still likely to comply with such stringent background checks at the time of account opening, existing customers baulk at such practices as a breach of trust and privacy. As mentioned earlier, regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let's look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in the proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications for the operational risk of financial processes:

    • Inconsistency of processes across lines of business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology, and in turn makes breaches harder to detect.
    • The proliferation of processes, coupled with increasingly frequent change cycles, has resulted in accidental breaches and increased vulnerability to regulatory inadequacies.

    In summary, regulatory growth (including overlap and conflict), coupled with process proliferation and inconsistency, is driving process compliance complexity. In my next post I will address the implications of this process complexity for financial institutions and outline the role of BPM in lowering specific aspects of the operational risk of financial processes.

  • Capitalizing on JavaScript's prototypal inheritance

    - by keithjgrant
    JavaScript has a class-free object system in which objects inherit properties directly from other objects. This is really powerful, but it is unfamiliar to classically trained programmers. If you attempt to apply classical design patterns directly to JavaScript, you will be frustrated. But if you learn to work with JavaScript's prototypal nature, your efforts will be rewarded. ... It is Lisp in C's clothing. -Douglas Crockford. What does this mean for a game developer working with canvas and HTML5? I've been looking over this question on useful design patterns in gaming, but prototypal inheritance is very different from classical inheritance, and there are surely differences in the best way to apply some of these common patterns. For example, classical inheritance allows us to create a moveableEntity class and extend it with any classes that move in our game world (player, monster, bullet, etc.). Sure, you can strong-arm JavaScript into working that way, but in doing so you are kind of fighting against its nature. Is there a better approach to this sort of problem when we have prototypal inheritance at our fingertips?
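
    A brief sketch of the prototypal alternative (not from the original post; the names are illustrative): build a movableEntity prototype object and derive concrete entities from it with Object.create, instead of simulating a class hierarchy:

        // The shared prototype: behaviour common to everything that moves.
        var movableEntity = {
            init: function (x, y) { this.x = x; this.y = y; return this; },
            move: function (dx, dy) { this.x += dx; this.y += dy; }
        };

        // A bullet inherits movement directly from movableEntity.
        var bullet = Object.create(movableEntity);
        bullet.speed = 5;
        bullet.update = function () { this.move(0, -this.speed); };

        var b = Object.create(bullet).init(10, 20);
        b.update();
        console.log(b.x, b.y); // 10 15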

  • [Kubuntu 14.04] Eclipse (ADT) crashes when pressing OK in Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can reproduce it by going to Project properties and pressing OK; then it always crashes.

    My system:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

    My Java:

        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    My ADT version: Android Development Toolkit 23.0.0.1245622. I already tried adding this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:

        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla

    Error:

        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  [libgobject-2.0.so.0+0x19718]  g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
         total in heap  [0x00007fe051da6790,0x00007fe051da6af0] = 864
         relocation     [0x00007fe051da68b0,0x00007fe051da68f8] = 72
         main code      [0x00007fe051da6900,0x00007fe051da6ae8] = 488
         oops           [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.

    Now, as soon as I change SystemSettings - Application Appearance - GTK - GTK theme to something other than "oxygen-gtk", the crash no longer happens, but the application appearance is ugly. Besides that, I get a lot of errors/warnings like this:

        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed

    or other GTK warnings from the particular theme about the missing theme engine, none of which actually seems to cause any crash so far. So I have three options: accept crashes, accept warnings (maybe the best choice), or accept an ugly design. What can I do to solve this issue without changing the design settings?

  • JavaScript Class Patterns Revisited: Endgame

    - by Liam McLennan
    I recently described some of the patterns used to simulate classes (types) in JavaScript, but I missed the best pattern of them all. I described a pattern I called constructor function with a prototype, which looks like this:

        function Person(name, age) {
            this.name = name;
            this.age = age;
        }

        Person.prototype = {
            toString: function() {
                return this.name + " is " + this.age + " years old.";
            }
        };

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    and I mentioned that the problem with this pattern is that it does not provide any encapsulation, that is, it does not allow private variables. Jan Van Ryswyck recently posted the solution, obvious in hindsight, of wrapping the constructor function in another function, thereby allowing private variables through closure. The above example becomes:

        var Person = (function() {
            // private variables go here
            var name, age;

            function constructor(n, a) {
                name = n;
                age = a;
            }

            constructor.prototype = {
                toString: function() {
                    return name + " is " + age + " years old.";
                }
            };

            return constructor;
        })();

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    Now we have prototypal inheritance and encapsulation. The important thing to understand is that the constructor and the toString function both have access to the name and age private variables, because they are in an outer scope and become part of the closure.

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is "Greatness Isn't Born. It's Grown. Here's How." The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate: when a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone-over-copper to high-speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long-held theory that it takes 10,000 hours of practice to achieve mastery over a given subject: 10,000 hours or 10 years, as in "Teach Yourself Programming in Ten Years" and others. However, it is not just that more hours equal more mastery. The other factors Coyle identifies include deep practice: practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: "Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy." Challenging yourself, continuing to persist in the face of repeated failures, and practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And so the balancing contest ensues. I just finished working on a proof of concept (POC) for a project we are bidding on. It was completely time-boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated: time was ticking, I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan of writing the HTML into the response by hand, I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness.

  • Application upgrade triggered from web application on Ubuntu/Linux

    - by Witek
    On my Ubuntu server I have an application, MyApp, which runs as a daemon under its own user, myapp. I also have a web application, MyPortal, which runs in Apache httpd as the user www-data. This application serves a web page with a "Redeploy MyApp" button. When this button is clicked, I want to start the script redeploymyapp, which stops the MyApp daemon, upgrades the application and starts the daemon again. The problem is that the redeploymyapp script needs to be executed by the user myapp, while MyPortal is running as www-data. What is the best way to solve this problem?
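
    One common approach (a sketch, not from the original question; the script path is hypothetical) is a narrow sudoers rule that lets www-data run exactly that one script as myapp:

        # /etc/sudoers.d/redeploymyapp -- edit via visudo; the path is a placeholder
        www-data ALL=(myapp) NOPASSWD: /usr/local/bin/redeploymyapp

    The web application can then invoke something like sudo -u myapp /usr/local/bin/redeploymyapp, ideally in the background so the HTTP request does not block while the daemon restarts.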

  • What do I need to develop a PHP extension in LAMPP?

    - by Fernando Costa
    I'm dealing with a problem in my system: I have to deliver the system to clients, and it was built in PHP, JS, shell script and SQL. I would like to encrypt or obfuscate the code to keep it from the eyes of others. Someone from the community told me about building my own PHP extension, which sounds like a great idea, since the logic would not sit with the main code of the system. But I see a problem with this approach: if a programmer gets into the extension and finds the logic, all the hard work is gone. So I'm here to ask again about this matter. What is the best way to hide my business logic from third parties? I know there are tools like ionCube, Zend Guard and many others, but I'm looking for something I can build myself. Is a PHP extension the right way to go? Or some half-SaaS system, with the business-logic dependencies on a remote server? About the environment: Kernel Linux 2.6.37.1-1.2, LAMPP (Apache 2.2, MySQL 5.5, PHP 5.3.8). In PHP the extensions are generally located at /php/ext/, but in LAMPP I have no idea where; I just found a folder /opt/lampp/lib/php/extensions/. Is that the right place?
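
    A quick way to confirm where LAMPP's PHP actually loads extensions from is to ask the bundled binary itself (assuming the standard /opt/lampp layout):

        # Print the extension directory this PHP build was configured with
        /opt/lampp/bin/php -i | grep extension_dir

        # Equivalent one-liner
        /opt/lampp/bin/php -r 'echo ini_get("extension_dir"), "\n";'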

  • links for 2011-03-18

    - by Bob Rhubart
    Events Overview (tags: ping.fm entarch)
    No description available. (tags: ping.fm)
    Andrejus Baranovskis: SOA & E2.0 Partner Community Forum Slides - Oracle ACE Director Andrejus Baranovskis shares slides from his presentation at the SOA & E2.0 Partner Community Forum in the Netherlands. (tags: oracle otn oracleace soa enterprise2.0 webcenter)
    ODTUG Kaleidoscope 2011 - The Premier Conference for Oracle Fusion Middleware - AMIS Technology blog: Oracle ACE Director Lucas Jellema shares information on what he considers "the best event for anyone doing, dabbling in or considering doing Oracle Fusion Middleware." (tags: oracle otn oracleace odtug fusionmiddleware)
    Mark Rittman: ODTUG K-Scope 2011 Early Bird Deadline is Closing - "The deadline for Early Bird registrations for Kscope is fast approaching [March 25]. If you want to attend at the discounted rate, sign up soon." - Oracle ACE Director Mark Rittman (tags: oracle otn oracleace odtug)
    Master Data Management and Cloud Computing (Oracle Master Data Management) - "Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed." - David Butler (tags: oracle otn mdm cloud)

  • Should mock objects for tests be created at a high or low level?

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to the objects under test? Should they be created at a 'high level', intercepting calls as soon as possible, or at a 'low level', so that as much of the real code as possible still gets called? For example, I'm writing a test for some code that requires a NoteMapper object, which allows Notes to be loaded from the DB:

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to SQL at all, e.g.:

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // $mockData = {'Test Note title', "Test note text"}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the real NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and would possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?

  • The simplest Ubuntu mail server

    - by John G.
    After days of trying all sorts of tutorials, I finally found a simple (not necessarily the best) solution for a functional Ubuntu mail server. First install Postfix:

        sudo aptitude install postfix

    Next type:

        sudo dpkg-reconfigure postfix

    and configure it like this:

        Internet Site
        yourdomain.com
        john  (type your Ubuntu user)
        yourdomain.com, localhost.localdomain, localhost
        No
        127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.1/24  (replace 192.168.0.1 with your server's IP address)
        0
        +
        all

    Next install mail-stack-delivery:

        sudo aptitude install mail-stack-delivery

    At this point you have a working mail server. Next, I configured SquirrelMail and started sending and receiving mail. This configuration worked with both Apache and Nginx.

  • SQL Server 2000: preventing logons whilst performing a backup for a side-by-side migration

    - by pigeon
    I'm looking for a way to prevent logons while taking a full backup of a database, in order to migrate it from its current SQL Server 2000 instance to a new SQL Server 2005 instance. A friend of mine suggested running a script which would put the DB into a rollback state. Not being a DBA, my DDL is very poor, and running a script that I don't understand may not be the best idea. One option which might be easier is to simply detach the database and copy the files to the new server. Any suggestions would be greatly appreciated.
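
    For reference, a sketch of the 'rollback state' script the friend probably meant (the database name and backup path are placeholders): SINGLE_USER blocks further logons, and ROLLBACK IMMEDIATE disconnects current sessions:

        -- Disconnect current users and block new logons to this database
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        -- Take the full backup while the database is quiesced
        BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak';

        -- Re-open the database afterwards
        ALTER DATABASE MyDatabase SET MULTI_USER;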

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. Additionally, I use JAXB in combination with standard ZIP compression/decompression to store a list of extra data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a master list containing the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML worked fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL might be the better move. Can anyone explain the key differences and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only array list of values I have can just be concatenated into a single field and parsed later.
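
    If MySQL wins out, a minimal schema sketch (hypothetical names, mirroring the one-file-per-skill layout plus the master list) might look like this; the master list simply becomes SELECT name FROM skill:

        -- One row per skill replaces one XML file per skill.
        CREATE TABLE skill (
            id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name        VARCHAR(64)  NOT NULL UNIQUE,
            description TEXT,
            effect_data VARCHAR(255)  -- the concatenated value list mentioned above
        );

        -- Which character has which skill.
        CREATE TABLE character_skill (
            character_id INT UNSIGNED NOT NULL,
            skill_id     INT UNSIGNED NOT NULL,
            PRIMARY KEY (character_id, skill_id)
        );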

  • Excel: exporting/importing different columns to/from different CSV files

    - by Sisyphus
    Is there a way to batch-export different columns to different CSV files in Excel on OS X? I'm thinking something along the lines of Automator, AppleScript or bash. I've had a play around with Automator and so far no luck. The best I have accomplished is exporting the whole sheet and then using sed to strip out what I don't need, but this is terribly inefficient. Also, is there a method to batch-import multiple CSV files into columns? Thanks in advance, and sorry I didn't tag Excel correctly; it wouldn't allow me to create the excel:mac tag.
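
    Staying with the shell approach already tried, here is a sketch that skips the sed step (assuming the sheet is first saved as sheet.csv and contains no quoted embedded commas): awk can split every column into its own file in one pass:

        # col1.csv, col2.csv, ... each receive one column of sheet.csv
        awk -F',' '{ for (i = 1; i <= NF; i++) print $i > ("col" i ".csv") }' sheet.csv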
