Search Results

Search found 108585 results on 4344 pages for 'test user'.


  • smartctl short test doesn't seem to complete

    - by Cédric COPY
    I am working on a project that involves automated HDD testing through smartctl. The station works fine with most products, but I have two specific products that fail the smartctl test. Both are WD drives (WD2500BUDT series). The smartctl behaviour is quite strange: the test launches without any problem, I wait about 2 minutes (the test length), and when I check smartctl I get no result at all. It's as if I had never launched a test (no failure, no success in the smartctl self-test log). The command returns no error, and there is nothing in syslog. As I said, the test works for other products; thousands of drives have passed it. The main smartctl commands used are smartctl -t short /dev/sdX (to launch the test) and smartctl -l selftest /dev/sdX (to read the test result). I have also tried smartctl -s on /dev/sdX and smartctl -o on /dev/sdX, but that doesn't change anything. The system is Debian 6.0 with smartctl 5.40 (rev 3124) x86_64, and the HDDs are attached through a SATA-to-PCI controller, 4 HDDs connected at a time. If anyone has hints about this problem I'd appreciate them, because I have no idea how to fix it. Thanks in advance. PS: Not sure if this was a Server Fault topic, sorry if I got it wrong!

    Read the article

  • LAMP: How do I set up http://myservername.com/~user access?

    - by Travesty3
    Been trying to Google this, but I can't figure out good search terms to find any info about what I need, since I don't really know what it's called. I'm pretty much being thrown to the wolves to figure out how to set up a LAMP server. We had someone who knew how to do it, he set one up and then quit. It was set up so that when I went to "http://{myservername}.com/~travis" it showed the contents of my /home/travis/public_html folder. This worked fine, then we lost power and the server restarted (I know, battery backup, but this is a dev server in a dev building so it's OK). Now, the browser can't find that URL. I also need to know how to set this up on a new server, so instead of wasting time diagnosing this problem (probably just something dumb I did messing with settings or something), I really need to know how to set this up from scratch. Thanks for taking the time to read this and (hopefully) answer!

    Read the article

  • How can I make a Prism webapp look like Firefox to a website? (user agent spoofing)

    - by Alex Aaron Goven
    I thought it would be cool to use Mozilla's Prism to create a webapp for min.us, but drag and drop is disallowed because the site doesn't recognise the program as Firefox, Chrome or Safari, which are apparently the only browsers allowed to do drag and drop, for fear that something will be horribly broken. I'm pretty sure Prism runs on the same engine as Firefox, though I wouldn't be surprised if Prism is running an older version, since it's something of a forgotten beta. Anyway, as the title says, I want to make Prism webapps look like Firefox to websites so I can unlock these features. If it can only be done with Fluid, then answers about that are fine too; I'm not sure what engine it runs on, though.

    Read the article

  • How to find a user's (or mine) access rights on Windows Server 2008?

    - by Faiz
    I was given access to a Windows Server 2008 box and I need to check what permissions I have on that box (and, if possible, in the entire domain). I don't have access to the domain controller and I don't want to write LDAP queries; I'm looking for a GUI option or some command-line tool. Is there any way to do this? PS: I am not into network administration, I am a BI developer, so pardon me if this is a stupid question.

    Read the article

  • Auto Mocking using JustMock

    - by mehfuzh
    Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. That is the one-sentence, more or less formal definition. Put more informally, an auto mocking container is nothing but a tool that keeps your tests in sync so that you don't have to go back and change them every time you add a new dependency to your SUT, or System Under Test. As of Q3 2012, JustMock ships with a built-in auto mocking container. This gives developers all the existing fun they are having with JustMock, plus they can now mock objects with dependencies in a more elegant way, without doing the homework of managing the object graph themselves. If you are not familiar with auto mocking, I won't go ahead and educate you here; I'd rather ask you to do so from the content already made available by the community, as that is beyond the scope of this post. Moving forward, getting started with JustMock auto mocking is pretty simple. First, I reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course), which it uses internally, and next I write my tests against the mocking container. It's that simple! In this post I will first mock the target with its dependencies using the current method, and then do the same with the auto mocking container. In short, the sample is a report builder that goes through all the existing reports, sends email, and logs any exception in that process. This is roughly what my report builder class looks like (see the sketch below; the original screenshots are not reproduced here). The Reporter class depends on the following interfaces: IReportBuilder, used to create and get the available reports; IReportSender, used to send the reports; and ILogger, used to log any exception. Now, if I just write the test without using an auto mocking container, it might end up looking like the first test in the sketch. It looks fine; the only issue is that I am creating a mock for each dependency, which is grunt work, and if you have an ever-changing list of dependencies it becomes really hard to keep the tests in sync. The typical example is an ASP.NET MVC controller where the number of service dependencies grows along with the project. The same test, written with the auto mocking container, follows it. A few things to observe: I didn't create a mock for each dependency; there is no extra step creating the Reporter class and passing in the dependencies; and since ILogger is not required for the purpose of this test, I can be completely ignorant of it. How cool is that? Auto mocking in JustMock has just been released, and we also want to extend it even further using the profiler, so that it can resolve not just interfaces but concrete classes as well. That, of course, starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of Telerik's official channels. Hope that helps.
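    The code samples in the original post were screenshots and are not included here. The sketch below is a hypothetical reconstruction based on the prose: the Reporter class, its three dependencies, a test that wires mocks by hand with Mock.Create, and the same test using the auto mocking container. The MockingContainer<T> calls reflect JustMock's documented auto-mocking API as best understood and should be treated as an approximation, not the author's exact code.

        using System;
        using System.Collections.Generic;
        using NUnit.Framework;
        using Telerik.JustMock;
        using Telerik.JustMock.AutoMock;
        using Telerik.JustMock.Helpers;

        // Hypothetical reconstruction of the types described in the prose above.
        public class Report { }
        public interface IReportBuilder { IEnumerable<Report> GetReports(); }
        public interface IReportSender { void Send(Report report); }
        public interface ILogger { void Log(Exception ex); }

        public class Reporter
        {
            private readonly IReportBuilder _builder;
            private readonly IReportSender _sender;
            private readonly ILogger _logger;

            public Reporter(IReportBuilder builder, IReportSender sender, ILogger logger)
            {
                _builder = builder;
                _sender = sender;
                _logger = logger;
            }

            public void SendReports()
            {
                foreach (var report in _builder.GetReports())
                {
                    try { _sender.Send(report); }
                    catch (Exception ex) { _logger.Log(ex); }
                }
            }
        }

        [TestFixture]
        public class ReporterTests
        {
            // Without the container: every dependency is created and wired by hand.
            [Test]
            public void SendReports_SendsEachReport_ManualMocks()
            {
                var builder = Mock.Create<IReportBuilder>();
                var sender = Mock.Create<IReportSender>();
                var logger = Mock.Create<ILogger>();
                Mock.Arrange(() => builder.GetReports()).Returns(new[] { new Report() });

                new Reporter(builder, sender, logger).SendReports();

                Mock.Assert(() => sender.Send(Arg.IsAny<Report>()), Occurs.Once());
            }

            // With the container: Reporter is built for us, mocks are supplied for every
            // constructor dependency, and ILogger never has to be mentioned at all.
            [Test]
            public void SendReports_SendsEachReport_AutoMocked()
            {
                var container = new MockingContainer<Reporter>();
                container.Arrange<IReportBuilder>(b => b.GetReports()).Returns(new[] { new Report() });

                container.Instance.SendReports();

                container.Assert<IReportSender>(s => s.Send(Arg.IsAny<Report>()));
            }
        }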

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The Set-up: Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.

    The Challenge: I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: you have to achieve a certain number of points in each category to move up; if you get the number of points needed in Cat 8 but still have other work to do to complete the level, any new Cat 8 points count toward your overall score but don't "roll over" into the next level; the number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed; and the number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:

    Possible Solutions: (1) Add a column to the Users table for each category and reset them all to zero each time a user levels up. (2) Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a Many-to-Many version of #1.) (3) Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table.

    Pros & Cons: (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}".) It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like:

        SELECT categoryId, SUM(points)
        FROM UserTasks
        WHERE userId = {user.id} AND countsTowardLevel = {user.level}
        GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas. P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game dev-focused one. Curious to see if I get different answers one place than the other...

    Read the article

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns has a value in my model. I found somewhere on the web that I could create a custom validator as follows:

        # Check for the presence of one or another field:
        # :validates_presence_of_at_least_one_field :last_name, :company_name - would require either last_name or company_name to be filled in
        # also works with arrays
        # :validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state] - would require email or a mailing type address
        module ActiveRecord
          module Validations
            module ClassMethods
              def validates_presence_of_at_least_one_field(*attr_names)
                msg = attr_names.collect {|a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s}.join(", ") + "can't all be blank. At least one field must be filled in."
                configuration = { :on => :save, :message => msg }
                configuration.update(attr_names.extract_options!)

                send(validation_method(configuration[:on]), configuration) do |record|
                  found = false
                  attr_names.each do |a|
                    a = [a] unless a.is_a?(Array)
                    found = true
                    a.each do |attr|
                      value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s]
                      found = !value.blank?
                    end
                    break if found
                  end
                  record.errors.add_to_base(configuration[:message]) unless found
                end
              end
            end
          end
        end

    I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want, and it works perfectly when I test it manually in the development environment, but when I write a unit test it breaks my test environment. This is my unit test:

        require 'test_helper'

        class CustomerTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "customer not valid" do
            puts "customer not valid"
            customer = Customer.new
            assert !customer.valid?
            assert customer.errors.invalid?(:subdomain)
            assert_equal "Company Name and Last Name can't both be blank.", customer.errors.on(:contact_lname)
          end
        end

    This is my model:

        class Customer < ActiveRecord::Base
          validates_presence_of :subdomain
          validates_presence_of_at_least_one_field :customer_company_name, :contact_lname, :message => "Company Name and Last Name can't both be blank."

          has_one :service_plan
        end

    When I run the unit test, I get the following error:

        DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
        Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users DETAIL: There are 1 other session(s) using the database. 
: DROP DATABASE IF EXISTS "acs_test"> acs_test already exists NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers" NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans" /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb" /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError) from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing' from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run' from /home/george/projects/advancedcomfortcs/config/environment.rb:9 from ./test/test_helper.rb:2:in `require' from ./test/test_helper.rb:2 from ./test/unit/customer_test.rb:1:in `require' from ./test/unit/customer_test.rb:1 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...] (See full trace by running task with --trace) It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting to do? Thanks, George

    Read the article

  • Multiple asserts in single test?

    - by Gern Blandston
    Let's say I want to write a function that validates an email address with a regex. I write a little test to check my function, write the actual function, and make it pass. However, I can come up with a bunch of different ways to test the same function ([email protected]; [email protected]; test.test.com, etc.). Do I put all the variations I need to check into the same single test, with several asserts, or do I write a new test for every single case I can think of? Thanks!
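    For illustration (this code is not from the original question), here is how the two options might look with NUnit: one test holding several asserts versus a parameterized test with one case per address. The EmailValidator helper and the sample addresses are invented stand-ins.

        using NUnit.Framework;
        using System.Text.RegularExpressions;

        public static class EmailValidator
        {
            // Hypothetical implementation under test: a deliberately simple regex.
            public static bool IsValid(string address) =>
                Regex.IsMatch(address ?? "", @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
        }

        [TestFixture]
        public class EmailValidatorTests
        {
            // Option 1: one test, several asserts. A failure on the first assert
            // hides the results of the later ones.
            [Test]
            public void Valid_And_Invalid_Addresses()
            {
                Assert.IsTrue(EmailValidator.IsValid("user@example.com"));
                Assert.IsTrue(EmailValidator.IsValid("user@sub.example.com"));
                Assert.IsFalse(EmailValidator.IsValid("test.test.com"));
            }

            // Option 2: one logical check, one case per input. Each input is
            // reported as its own pass/fail, so nothing is hidden.
            [TestCase("user@example.com", true)]
            [TestCase("user@sub.example.com", true)]
            [TestCase("test.test.com", false)]
            public void IsValid_Returns_Expected(string address, bool expected)
            {
                Assert.AreEqual(expected, EmailValidator.IsValid(address));
            }
        }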

    Read the article

  • Combining RequiresSTA and Timeout attributes on a test fails

    - by Peter Lillevold
    I have a test that opens and closes a WPF Window and thus requires the STA threading apartment. To safeguard the test against the window staying open (and thus hanging the test indefinitely) I wanted to use the Timeout attribute. The problem is that applying the Timeout attribute causes the test to fail on timeout regardless of whether the test works or not. Without the attribute everything works fine. My theory is that Timeout causes the test to be executed on a new thread that does not inherit the STA apartment. Is there another way to have both STA and the timeout safeguard in NUnit? My test looks something like this:

        [Test, RequiresSTA, Timeout(300)]
        public void Construct()
        {
            var window = new WindowView();
            window.Loaded += (sender, args) => window.Close();

            var app = new Application();
            app.Run(window);

            try
            {
                // ...run system under test
            }
            finally
            {
                app.Shutdown();
            }
        }
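    One possible workaround, sketched below rather than taken from the question, is to drop both attributes and run the window on an explicitly created STA thread, enforcing the timeout with Thread.Join. WindowView is the asker's own class; the rest uses standard .NET threading APIs.

        using System;
        using System.Threading;
        using System.Windows;
        using NUnit.Framework;

        [TestFixture]
        public class WindowViewTests
        {
            [Test]
            public void Construct_OnOwnStaThread_WithManualTimeout()
            {
                Exception failure = null;

                // Run the WPF work on a thread we create ourselves and mark as STA,
                // so neither RequiresSTA nor Timeout is needed on the test method.
                var worker = new Thread(() =>
                {
                    try
                    {
                        var window = new WindowView();
                        window.Loaded += (sender, args) => window.Close();
                        new Application().Run(window);
                    }
                    catch (Exception ex)
                    {
                        failure = ex;
                    }
                });
                worker.SetApartmentState(ApartmentState.STA);
                worker.IsBackground = true;
                worker.Start();

                // Enforce the timeout manually instead of via the Timeout attribute.
                bool finished = worker.Join(TimeSpan.FromMilliseconds(300));

                Assert.IsTrue(finished, "The window did not close within the timeout.");
                Assert.IsNull(failure, "The window thread threw: " + failure);
            }
        }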

    Read the article

  • Organizing test hierarchy in clojure project

    - by Sergey
    There are two directories in a clojure project - src/ and test/. There's a file my_methods.clj in the src/calc/ directory which starts with (ns calc.my_methods ...). I want to create a test file for it in the test directory, test/my_methods-test.clj:

        (ns test.my_methods-test
          (:require [calc.my_methods])
          (:use clojure.test))

    The $CLASSPATH contains both the project root directory and the src/ directory, but the exception is still "Could not locate calc/my_methods__init.class or calc/my_methods.clj on classpath". What is the problem with requiring it in the test file? echo $CLASSPATH gives this: ~/project:~/project/src

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write-up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs.

    My method, and how it sucked: I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions: What do you do? (Select the option that best describes your role.) What kind of software does your organization make? (optional) In your organization, who writes the text on your software user interfaces (for example: button names, static text, tooltips, and so on)? Tick all that apply. In your organization, who is responsible for user interface text? Who "owns" it? The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet.

    What did people do? Boring graph number one: I wasn't expecting that. Given I pimped the survey on Twitter and a couple of Tech Comms discussion lists, I was more banking on an even Content Strategy/Tech Comms split. What the "Others" specified: three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry.

    Who wrote UI text? Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production. 
I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated.

    Who owned UI text? Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: in around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post. So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring the frequency of those accidents. What the Others specified: "I don't know who owns it. I assume the project manager is responsible." "copywriters" (when they wish to annoy me). "the client's web maintenance person, often PR or MarComm". That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand and use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks.

    The most interesting question was one I forgot to ask. It's this: does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it becomes more interesting. So taken with the other questions, this would have let me find out what I really want to know: What proportion of organizations have Tech Comms professionals but don't use them for UI text? Who writes UI text in their place? Why does this happen? 
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign around how we handle user account administration. I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when an error is reported. The interim phase is going to be semi-manual; that is, a level 2 tech inputs the user's info and supervises the process. The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories of any terminated user for 60 days in case the legal team requires access to their files. Our AD is as much a mess as everything else, so there are some users with home directories and some with profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS. In order to complete the process manually, you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many, many gigs of nonsense and this can take quite some time. Even automated, the process would not be a quick one. I'm thinking that I need a client-side C# GUI for the quick stuff and a server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job. So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem; how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now. If we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#.NET and batch scripting, but I'm a quick study and have used most every language once or twice. Thank you for reading this and I look forward to seeing what imaginative solutions you all can come up with.
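    Below is a rough sketch (not from the original post) of the server-side console-app idea the poster is weighing: a small C# wrapper that shells out to robocopy to move a terminated user's profile share into an archive location. The share paths, log file, and naming convention are placeholders, and the /B backup-mode flag assumes the account running it has backup privileges (similar effect to takeown).

        using System;
        using System.Diagnostics;
        using System.IO;

        class ArchiveProfile
        {
            // Usage: ArchiveProfile.exe <username>
            static int Main(string[] args)
            {
                if (args.Length != 1)
                {
                    Console.Error.WriteLine("Usage: ArchiveProfile <username>");
                    return 2;
                }

                string user = args[0];
                string source = Path.Combine(@"\\fileserver\profiles$", user);
                string target = Path.Combine(@"\\fileserver\archive$",
                                             $"{user}_{DateTime.Now:yyyyMMdd}");

                // robocopy /MOVE copies then deletes the source; /B uses backup mode,
                // which sidesteps most permission problems for a suitably privileged account.
                var psi = new ProcessStartInfo("robocopy.exe",
                    $"\"{source}\" \"{target}\" /E /MOVE /B /R:1 /W:1 /NP /LOG+:archive.log")
                {
                    UseShellExecute = false
                };

                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    // robocopy exit codes below 8 indicate success (with or without extras).
                    return p.ExitCode < 8 ? 0 : p.ExitCode;
                }
            }
        }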

    Read the article

  • Spring - PropertiesPlaceholderConfigurer not finding properties file

    - by sat
    Not sure what could be wrong. I had an app that worked all along with this:

        <context:property-placeholder location="classpath:my.properties"/>

    No problems finding the properties file and hooking things up. Now, I needed to encrypt some fields in the properties file, so I ended up writing a custom PropertiesPlaceholderConfigurer and tried to wire it up like this:

        <bean class="com.mycompany.myapp.PropertiesPlaceholderConfigurer">
            <property name="location" value="classpath:my.propeties"/>
        </bean>

    With this configuration, Spring complains that it cannot find the properties file:

        java.io.FileNotFoundException: class path resource [my.propeties] cannot be opened because it does not exist

    What in addition should be done? The custom placeholder configurer:

        package com.mycompany.myapp;

        import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
        import org.springframework.util.ObjectUtils;

        import java.util.Enumeration;
        import java.util.Properties;

        public class PropertiesPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

            @Override
            protected void convertProperties(Properties props) {
                Enumeration<?> propertyNames = props.propertyNames();
                while (propertyNames.hasMoreElements()) {
                    String propertyName = (String) propertyNames.nextElement();
                    String propertyValue = props.getProperty(propertyName);
                    if (propertyName.endsWith("encrypted")) {
                        System.out.println("Decrypting the property " + propertyName);
                        String convertedValue = decrypt(propertyValue);
                        System.out.println("Decrypted the property value to " + convertedValue);
                        if (!ObjectUtils.nullSafeEquals(propertyValue, convertedValue)) {
                            props.setProperty(propertyName, convertedValue);
                        }
                    }
                }
            }
        }

    Update: Forget my custom placeholder configurer; even the Spring-provided one has trouble if I replace the configuration with this:

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="location" value="classpath:my.propeties"/>
        </bean>

    What is context:property-placeholder doing that the bean definition can't? 
Full stack trace java.lang.IllegalStateException: Failed to load ApplicationContext at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:99) at org.springframework.test.context.DefaultTestContext.getApplicationContext(DefaultTestContext.java:101) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:319) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:212) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:232) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:89) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:175) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) Caused by: org.springframework.beans.factory.BeanInitializationException: Could not load properties; nested exception is java.io.FileNotFoundException: class path resource [my.propeties] cannot be opened because it does not exist at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:89) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:265) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:162) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:609) at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:121) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:100) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:250) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91) at org.springframework.test.context.DefaultTestContext.getApplicationContext(DefaultTestContext.java:101) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:319) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:212) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:232) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:89) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:175) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) Caused by: java.io.FileNotFoundException: class path resource [my.propeties] cannot be opened because it does not exist at 
org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:158) at org.springframework.core.io.support.EncodedResource.getInputStream(EncodedResource.java:143) at org.springframework.core.io.support.PropertiesLoaderUtils.fillProperties(PropertiesLoaderUtils.java:98) at org.springframework.core.io.support.PropertiesLoaderSupport.loadProperties(PropertiesLoaderSupport.java:175) at org.springframework.core.io.support.PropertiesLoaderSupport.mergeProperties(PropertiesLoaderSupport.java:156) at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:80) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:265) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:162) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:609) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:121) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:100) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:250) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91) at org.springframework.test.context.DefaultTestContext.getApplicationContext(DefaultTestContext.java:101) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:319) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:212) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:232) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:89) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:175) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

    Read the article

  • Change the User Interface Language in Ubuntu

    - by Matthew Guay
    Would you like to use your Ubuntu computer in another language?  Here's how you can easily change your interface language in Ubuntu. Ubuntu's default install only includes a couple of languages, but it makes it easy to find and add a new interface language to your computer.  To get started, open the System menu, select Administration, and then click Language Support.

    Ubuntu may ask if you want to update or add components to your current default language when you first open the dialog.  Click Install to go ahead and install the additional components, or you can click Remind Me Later to wait, as these will be installed automatically when you add a new language. Now we're ready to find and add an interface language to Ubuntu.  Click Install / Remove Languages to add the language you want. Find the language you want in the list, and click the check box to install it.  Ubuntu will show you all the components it will install for the language; this often includes spellchecking files for OpenOffice as well.  Once you've made your selection, click Apply Changes to install your new language.  Make sure you're connected to the internet, as Ubuntu will have to download the additional components you've selected. Enter your system password when prompted, and then Ubuntu will download the needed language files and install them.

    Back in the main Language & Text dialog, we're now ready to set our new language as the default.  Find your new language in the list, and then click and drag it to the top of the list. Notice that Thai is the first language listed, and English is the second.  This will make Thai the default language for menus and windows in this account.  The tooltip reminds us that this setting does not affect system settings like currency or date formats. To change these, select the Text tab and pick your new language from the drop-down menu.  You can preview the changes in the bottom Example box. The changes we just made will only affect this user account; the login screen and startup will not be affected.  If you wish to change the language in the startup and login screens also, click Apply System-Wide in both dialogs.  Other user accounts will still retain their original language settings; if you wish to change them, you must do it from those accounts.

    Once you have your new language settings all set, you'll need to log out of your account and log back in to see your new interface language.  When you re-login, Ubuntu may ask you if you want to update your user folders' names to your new language.  For example, here Ubuntu is asking if we want to change our folders to their Thai equivalents.  If you wish to do so, click Update or its equivalent in your language. Now your interface will be almost completely translated into your new language.  As you can see here, applications with generic names are translated to Thai, but ones with specific names like Shutter keep their original name. Even the help dialogs are translated, which makes it easy for users around the world to get started with Ubuntu.  Once again, you may notice some things that are still in English, but almost everything is translated. Adding a new interface language doesn't add the new language to your keyboard, so you'll still need to set that up.  Check out our article on adding languages to your keyboard to get this set up. 
If you wish to revert to your original language or switch to another new language, simply repeat the above steps, this time dragging your original or new language to the top instead of the one you chose previously.

    Conclusion: Ubuntu has a large number of supported interface languages to make it user-friendly to people around the globe.  And since you can set the language for each user account, it's easy for multi-lingual individuals to share the same computer. Or, if you're using Windows, check out our article on how you can Change the User Interface Language in Vista or Windows 7, too!

    Read the article

  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users in order to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can also be used with a custom authentication provider, and if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you check whether a user is locked out and find out the number of locked users in the realm:

        #Define constants
        url='t3://localhost:7001'
        username='weblogic'
        password='weblogic'
        checkuser='test-deployer'

        #Connect
        connect(username,password,url)

        #Get Lockout Manager Runtime
        serverRuntime()
        dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
        ulmr = dr.getUserLockoutManagerRuntime()

        print '-------------------------------------------'

        #Check whether a user is locked
        if (ulmr.isLockedOut(checkuser) == 0):
            islocked = 'NOT locked'
        else:
            islocked = 'locked'
        print 'User ' + checkuser + ' is ' + islocked

        #Print number of locked users
        print 'No. of locked users - ', Integer(ulmr.getUserLockoutTotalCount())
        print '-------------------------------------------'
        print ''

        #Disconnect & Exit
        disconnect()
        exit()

    Read the article

  • Google I/O 2010 - Creating positive user experiences

    Google I/O 2010 - Beyond design: Creating positive user experiences. A Tech Talk by John Zeratsky and Matt Shobe. Good user experience isn't just about good design. Learn how to create a positive user experience by being fast, open, engaged, surprising, polite, and, well... being yourself. Chock full of examples from the web and beyond, this talk is a practical introduction for developers who are passionate about user experience but may not have a background in design. For all I/O 2010 sessions, please go to code.google.com. (Video from GoogleDevelopers, 52:11.)

    Read the article

  • HTG Explains: What’s a Browser User Agent?

    - by Chris Hoffman
    Your browser sends its user agent to every website you connect to. We've written about changing your browser's user agent before – but what exactly is a user agent, anyway? A user agent is a "string" – that is, a line of text – identifying the browser and operating system to the web server. This sounds simple, but user agents have become a mess over time.
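    As a small illustration (not part of the original article), the sketch below shows that the user agent is just a request header any client can set; here a C# HttpClient announces an arbitrary string. The example string and URL are placeholders.

        using System;
        using System.Net.Http;

        class UserAgentDemo
        {
            static void Main()
            {
                using (var client = new HttpClient())
                {
                    // The user agent is just a header: any client can claim to be any browser.
                    client.DefaultRequestHeaders.UserAgent.ParseAdd(
                        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ExampleBrowser/1.0");

                    var response = client.GetAsync("https://example.com/").Result;
                    Console.WriteLine(response.StatusCode);
                }
            }
        }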

    Read the article

  • Pgagent startup script (under the postgres user)

    - by Dominique Guardiola
    Hello, I'm trying to make a clean startup script for pgagent. I found one here, but I don't see how I can change this:

        if start-stop-daemon --start --quiet --pidfile /var/run/pgagent.pid \
            --exec /usr/bin/pgagent "hostaddr=127.0.0.1 dbname=postgres user=postgres \
            password=XXXXXXX"; then

    to launch something like this:

        su - postgres -c /usr/bin/pgagent "hostaddr=127.0.0.1 dbname=postgres user=postgres"

    in order to avoid hard-coding the PG password in the script. This is possible using the .pgpass file feature, and it works when I'm logged in as the postgres user. So my only remaining problem is how to launch this command as the postgres user. I tried adding --user=postgres to the call, as mentioned here, but it does not work.

    Read the article

  • Test a simple multi-player (up to four players) Android game on a single developer machine

    - by Kush
    I'm working on a multi-player Android game (so simple that it doesn't use any game engine). The game is based on Java sockets. Four devices connect to the game server, and a new thread manages their session; the game server will serve many such sessions (with 4 players each). What I'm worried about is testing this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core 2 Duo and on-board graphics), and I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration, and I still have to develop and test this game. P.S.: I'm a CS student and currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test even just 4 clients (i.e. only 1 session) would suffice; it's alright if I can't simulate a real environment with some 10-20 active game sessions (4 players each).

    Read the article

  • Adding user to chroot environment

    - by Neo
    I've created a chroot system in my Ubuntu install using schroot and debootstrap, based on minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens: I enter schroot as root and add a new user 'Bob' (I tried both the adduser and useradd commands). The username 'Bob' shows up in the /etc/passwd file and I can 'su' into the user 'Bob'. So far so good. When I log out of schroot and re-enter schroot, the user 'Bob' has vanished!! There is no mention of Bob in /etc/passwd either. How do I make the new user permanent?

    Read the article

  • BI Applications Test Drive: Joint Partner+Oracle Go To Market Initiatives

    - by Mike.Hallett(at)Oracle-BI&EPM
    A challenge you may be facing is how to easily show the business value of BI to a set of customers.  The key we find to achieving this is to show best-in-class business analytics examples specific to a business person's role and needs - e.g. "HR analytics" for HR professionals, "Spend Analytics" for procurement professionals, and so on. We have created for you, our specialised partners, the ability to run Oracle BI Applications Test Drive Workshops for your customers. These are carefully scripted to allow a customer business person (usually not IT) to navigate for themselves around a series of dashboards and analyses targeted to show how BI can help their business and drive ROI. These Oracle BI Applications Test Drive kits (in English) are now downloadable from our OMS4P/OPN portal. See it by clicking on this link: http://www.oracle.com/partners/secure/marketing/bi-apps-test-drive-519829.html (translations of this kit into Italian, French, Spanish and German will be added to the portal soon). NOTE: these are not designed for "training" customers; they really address the need for an effective call to action for any customer you talk to who is in the early stages of exploring their options and the business benefits of a BI project, especially if they are already an Oracle applications customer (E-Business Suite, PeopleSoft, Siebel, JDE). For more demand generation kits see another blog article, "Joint Partner+Oracle Go To Market Initiatives: BI Customer Event Kits".

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now, the issue preventing me from running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without generating unique test data fields for each test. For example: when testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. When testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try and do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
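    One common way out, sketched below purely for illustration (it is not from the original question), is to have each test generate its own unique data instead of sharing the literal foo/bar values, so parallel runs can never collide. NUnit-style, with the real UI-driving steps left as placeholders.

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class RowCreationTests
        {
            private string _columnA;
            private string _columnB;

            [SetUp]
            public void GenerateUniqueTestData()
            {
                // Each test (and each parallel worker) gets values no other test uses,
                // so no cleanup step can delete a record another test is expecting.
                var token = Guid.NewGuid().ToString("N");
                _columnA = "foo_" + token;
                _columnB = "bar_" + token;
            }

            [Test]
            public void Creating_A_Row_Succeeds()
            {
                // Drive the UI to create a record with (_columnA, _columnB),
                // then assert that it exists. Placeholder for the real UI steps.
            }

            [Test]
            public void Creating_A_Duplicate_Row_Shows_An_Error()
            {
                // Insert (_columnA, _columnB) directly, then drive the UI to create
                // the same record and assert the duplicate error is shown.
            }
        }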

    Read the article

  • How would you TDD the functionality of getting the corresponding process of a running Windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is going to mostly use external API calls.

    Problem to solve: Get the process associated with a Windows service using the service name. Example: Function GetProcess(ByVal serviceName As String) As Process

    Rules: Show each major iteration in production and test code using TDD. There is no need to see any other code or configuration required to get things to run; I'm just curious about the interfaces, concrete classes, and test methods. C# or VB.NET. Must use the .NET framework types for services/processes (i.e. System.Diagnostics.Process). Test frameworks: NUnit or MSTest. Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles. Must write true unit tests (no integration tests).

    Additional notes: As far as I can tell there are two approaches, design-wise. (1) Use an Inversion of Control approach along with the Adapter and/or Facade patterns to wrap the underlying .NET framework objects dealing with processes and services. (2) Keep the .NET framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
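    Purely as an illustration of the first approach (IoC plus an adapter), here is one way the production seam and its unit test might look with NUnit and Moq. All type and member names are invented for the example, and a real adapter would still need an actual way to map a service name to a process id (for instance via a WMI query), which is left unimplemented here.

        using System.Diagnostics;
        using Moq;
        using NUnit.Framework;

        // Thin seam over the framework calls: the only thing the unit test needs to fake.
        public interface IServiceProcessLookup
        {
            int GetProcessIdForService(string serviceName);   // e.g. backed by WMI in production
            Process GetProcessById(int processId);            // e.g. Process.GetProcessById
        }

        public class ServiceProcessFinder
        {
            private readonly IServiceProcessLookup _lookup;
            public ServiceProcessFinder(IServiceProcessLookup lookup) { _lookup = lookup; }

            public Process GetProcess(string serviceName)
            {
                var pid = _lookup.GetProcessIdForService(serviceName);
                return _lookup.GetProcessById(pid);
            }
        }

        [TestFixture]
        public class ServiceProcessFinderTests
        {
            [Test]
            public void GetProcess_Returns_The_Process_For_The_Service_Pid()
            {
                var expected = Process.GetCurrentProcess();
                var lookup = new Mock<IServiceProcessLookup>();
                lookup.Setup(l => l.GetProcessIdForService("MyService")).Returns(1234);
                lookup.Setup(l => l.GetProcessById(1234)).Returns(expected);

                var finder = new ServiceProcessFinder(lookup.Object);

                Assert.AreSame(expected, finder.GetProcess("MyService"));
            }
        }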

    Read the article

  • What technical test should I give to a job candidate

    - by Romain Braun
    I'm not sure if this is the right Stack Exchange website, but: I have three candidates coming in tomorrow. One has 15 years of experience in PHP, and the two others have about 1 year of experience in PHP/frontend development. For the latter two I was thinking about a test where they would have to develop a web app allowing users to manage other users, as in: display a list of users, display a single user, modify a user, and add extended properties to a user. This way it would feature HTML, CSS, JS, Ajax, PHP and SQL. Do you think this would be a good test? What test should I give to the first one? He needs something much more difficult, I guess. I'm also listening if you have any advice or ideas about what makes a good developer and what I should pay attention to in their code. I was also considering thinking outside the box, something more algorithm-related, and asking him to write the fastest function to tell whether a number is prime, because there are a lot of optimizations you can apply to such a function. They have one day to do it.
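    As a quick illustration of the optimization headroom in that prime-number exercise (this sketch is not from the original question), a candidate might start from naive trial division and tighten it like this:

        static class PrimeCheck
        {
            // Trial division with the classic optimizations: handle 2 and 3 specially,
            // then test only candidates of the form 6k +/- 1, stopping at sqrt(n).
            public static bool IsPrime(long n)
            {
                if (n < 2) return false;
                if (n < 4) return true;                    // 2 and 3 are prime
                if (n % 2 == 0 || n % 3 == 0) return false;

                for (long i = 5; i * i <= n; i += 6)
                {
                    if (n % i == 0 || n % (i + 2) == 0) return false;
                }
                return true;
            }
        }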

    Read the article
