Search Results

Search found 6839 results on 274 pages for 'functional tests'.


  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. The 2003 server is a physical machine; the 2008 server is a virtual machine. Here is what I have done thus far:

    1. Built a virtual machine running Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member.
    2. On the 2003 DC, raised the Domain Functional Level and Forest Functional Level to Windows Server 2003.
    3. On the 2003 DC, went into the registry, navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, and verified that the Schema Version is 30.
    4. On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition media to copy over the adprep folder. This version is the only one that seemed to work.
    5. On the 2003 DC, opened a command prompt, went to the adprep directory, and ran adprep /forestprep, adprep /domainprep, and adprep /domainprep /gpprep.
    6. On the 2008 server, installed the Active Directory Domain Services role from Server Manager.
    7. On the 2003 DC, went back into the registry under the same key and verified that the Schema Version is now 44.

    When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep". I went back to the 2003 DC, went through the adprep logs, and came across this:

        Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com.
        [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE).
        [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information.
        Adprep encountered an LDAP error. Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com'

    In fact, I got three of these errors. The LDAP error is consistent across all three, but the object named in the "Adprep was unable to modify the security descriptor on object" line differs. The three objects are:

        CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com

    The credentials I am using on the 2008 server when running dcpromo are those of my domain account, which is a member of the Domain Admins and Enterprise Admins groups. I've tried various quick fixes that I've come across through Google searches, including:

    - Disabling antivirus on the current DCs
    - Pointing DNS on the PDC to itself
    - Changing the Schema Update Allowed registry key to 1 and rerunning adprep; rerunning adprep told me that forest-wide information has already been updated
    - Disabling Windows Firewall on the Server 2008 box
    - On the 2003 DC, going to Domain Controller Security Policy > Local Policies > User Rights Assignment and adding Domain Admins to the "Enable computer and user accounts to be trusted for delegation" policy setting

    Both our PDC and BDC are Global Catalog servers - not sure if this matters or not. I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch - no avail... Help :(
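
    The NO_OBJECT result suggests adprep could not find the objects it was trying to re-ACL, rather than that it lacked rights to them. A useful first check - sketched here with this forest's DNs, run from a command prompt on the 2003 DC - is whether the Certificate Templates container and the three named template objects actually exist, and what their current security descriptors look like:

        rem Does the container exist at all?
        dsquery * "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com" -scope base

        rem List the template objects directly beneath it
        dsquery * "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com" -scope onelevel -attr cn

        rem Show the security descriptor adprep failed to merge with
        dsacls "CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com"

    If the container or the template objects are missing, that points at an incomplete or damaged PKI portion of the configuration partition rather than a permissions problem with the account running adprep.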

    Read the article

  • Can't build and run an android test project created using "ant create test-project" when tested proj

    - by Mike
    I have a module that builds an app called MyApp. I have another that builds some test cases for that app, called MyAppTests. They both build their own APKs, and they both work fine from within my IDE. I'd like to build them using ant so that I can take advantage of continuous integration. Building the app module works fine; I'm having difficulty getting the test module to compile and run. Using Christopher's tip from a previous question, I used

        android create test-project -p MyAppTests -m ../MyApp -n MyAppTests

    to create the necessary build files to build and run my test project. This seems to work great (once I remove an unnecessary test case that it constructed for me and revert my AndroidManifest.xml to the one I was using before it got replaced by android create), but I have two problems. The first problem: the project doesn't compile because it's missing libraries.

        $ ant run-tests
        Buildfile: build.xml
        [setup] Project Target: Google APIs
        [setup] Vendor: Google Inc.
        [setup] Platform Version: 1.6
        [setup] API level: 4
        [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.
        -install-tested-project:
        [setup] Project Target: Google APIs
        [setup] Vendor: Google Inc.
        [setup] Platform Version: 1.6
        [setup] API level: 4
        [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.
        -compile-tested-if-test:
        -dirs:
        [echo] Creating output directories if needed...
        -resource-src:
        [echo] Generating R.java / Manifest.java from the resources...
        -aidl:
        [echo] Compiling aidl files into Java classes...
        compile:
        [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes
        -dex:
        [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex...
        -package-resources:
        [echo] Packaging resources
        [aaptexec] Creating full resource package...
        -package-debug-sign:
        [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key...
        [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore
        debug:
        [echo] Running zip align on final apk...
        [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk
        install:
        [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device...
        [exec] 1567 KB/s (288354 bytes in 0.179s)
        [exec] pkg: /data/local/tmp/MyApp-debug.apk
        [exec] Success
        -compile-tested-if-test:
        -dirs:
        [echo] Creating output directories if needed...
        [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/gen
        [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin
        [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin/classes
        -resource-src:
        [echo] Generating R.java / Manifest.java from the resources...
        -aidl:
        [echo] Compiling aidl files into Java classes...
        compile:
        [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:4: package roboguice.test does not exist
        [javac] import roboguice.test.RoboUnitTestCase;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:8: package com.google.gson does not exist
        [javac] import com.google.gson.JsonElement;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:9: package com.google.gson does not exist
        [javac] import com.google.gson.JsonParser;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:11: cannot find symbol
        [javac] symbol: class RoboUnitTestCase
        [javac] public class GsonTest extends RoboUnitTestCase<MyApplication> {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:6: package roboguice.test does not exist
        [javac] import roboguice.test.RoboUnitTestCase;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:7: package roboguice.util does not exist
        [javac] import roboguice.util.RoboLooperThread;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:11: package com.google.gson does not exist
        [javac] import com.google.gson.JsonObject;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:15: cannot find symbol
        [javac] symbol: class RoboUnitTestCase
        [javac] public class HttpTest extends RoboUnitTestCase<MyApplication> {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:4: package roboguice.test does not exist
        [javac] import roboguice.test.RoboUnitTestCase;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:12: cannot find symbol
        [javac] symbol: class RoboUnitTestCase
        [javac] public class LinksTest extends RoboUnitTestCase<MyApplication> {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:4: package roboguice.test does not exist
        [javac] import roboguice.test.RoboUnitTestCase;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:5: package roboguice.util does not exist
        [javac] import roboguice.util.RoboAsyncTask;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:6: package roboguice.util does not exist
        [javac] import roboguice.util.RoboLooperThread;
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:12: cannot find symbol
        [javac] symbol: class RoboUnitTestCase
        [javac] public class SafeAsyncTest extends RoboUnitTestCase<MyApplication> {
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource': class file for roboguice.inject.InjectResource not found
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource'
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView': class file for roboguice.inject.InjectView not found
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView'
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView'
        [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView'
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol
        [javac] symbol : class JsonParser
        [javac] location: class com.myapp.test.GsonTest
        [javac] final JsonParser parser = new JsonParser();
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol
        [javac] symbol : class JsonParser
        [javac] location: class com.myapp.test.GsonTest
        [javac] final JsonParser parser = new JsonParser();
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:18: cannot find symbol
        [javac] symbol : class JsonElement
        [javac] location: class com.myapp.test.GsonTest
        [javac] final JsonElement e = parser.parse(s);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:20: cannot find symbol
        [javac] symbol : class JsonElement
        [javac] location: class com.myapp.test.GsonTest
        [javac] final JsonElement e2 = parser.parse(s2);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:19: cannot find symbol
        [javac] symbol : method getInstrumentation()
        [javac] location: class com.myapp.test.HttpTest
        [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getResources().getString(com.myapp.R.string.app_name));
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:62: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.HttpTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:82: cannot find symbol
        [javac] symbol : method assertTrue(java.lang.String,boolean)
        [javac] location: class com.myapp.test.HttpTest
        [javac] assertTrue(result[0], result[0].contains("Search"));
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:87: cannot find symbol
        [javac] symbol : class JsonObject
        [javac] location: class com.myapp.test.HttpTest
        [javac] final JsonObject[] result = {null};
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:90: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.HttpTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:117: cannot find symbol
        [javac] symbol : class JsonObject
        [javac] location: class com.myapp.test.HttpTest
        [javac] final JsonObject[] result = {null};
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:120: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.HttpTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:27: cannot find symbol
        [javac] symbol : method assertTrue(boolean)
        [javac] location: class com.myapp.test.LinksTest
        [javac] assertTrue(m.matches());
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:28: cannot find symbol
        [javac] symbol : method assertEquals(java.lang.String,java.lang.String)
        [javac] location: class com.myapp.test.LinksTest
        [javac] assertEquals( map.get(url), m.group(1) );
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:19: cannot find symbol
        [javac] symbol : method getInstrumentation()
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getString(com.myapp.R.string.app_name));
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:27: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:65: cannot find symbol
        [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State)
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] assertEquals(State.TEST_SUCCESS,state[0]);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:74: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:105: cannot find symbol
        [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State)
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] assertEquals(State.TEST_SUCCESS,state[0]);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:113: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:144: cannot find symbol
        [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State)
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] assertEquals(State.TEST_SUCCESS,state[0]);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:154: cannot find symbol
        [javac] symbol : class RoboLooperThread
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] new RoboLooperThread() {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:187: cannot find symbol
        [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State)
        [javac] location: class com.myapp.test.SafeAsyncTest
        [javac] assertEquals(State.TEST_SUCCESS,state[0]);
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:11: cannot access roboguice.activity.GuiceListActivity
        [javac] class file for roboguice.activity.GuiceListActivity not found
        [javac] public class StoriesTest extends ActivityUnitTestCase<Stories> {
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:21: cannot access roboguice.application.GuiceApplication
        [javac] class file for roboguice.application.GuiceApplication not found
        [javac] setApplication( new MyApplication( getInstrumentation().getTargetContext() ) );
        [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:22: incompatible types
        [javac] found : com.myapp.activity.Stories
        [javac] required: android.app.Activity
        [javac] final Activity activity = startActivity(intent, null, null);
        [javac] 39 errors
        [javac] 6 warnings

        BUILD FAILED
        /opt/local/android-sdk-mac/platforms/android-1.6/templates/android_rules.xml:248: Compile failed; see the compiler error output for details.
        Total time: 24 seconds

    That's not a hard problem to solve. I'm not sure it's the right thing to do, but I copied the missing libraries (roboguice and gson) from the MyApp/libs directory to the MyAppTests/libs directory and everything seems to compile fine. But that leads to the second problem, which I'm currently stuck on. The tests compile fine but they won't run:

        $ cp ../MyApp/libs/gson-r538.jar libs/
        $ cp ../MyApp/libs/roboguice-1.1-SNAPSHOT.jar libs/
        0 10:23 /Users/mike/Projects/myapp/android/MyAppTests $ ant run-tests
        Buildfile: build.xml
        [setup] Project Target: Google APIs
        [setup] Vendor: Google Inc.
        [setup] Platform Version: 1.6
        [setup] API level: 4
        [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.
        -install-tested-project:
        [setup] Project Target: Google APIs
        [setup] Vendor: Google Inc.
        [setup] Platform Version: 1.6
        [setup] API level: 4
        [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions.
        -compile-tested-if-test:
        -dirs:
        [echo] Creating output directories if needed...
        -resource-src:
        [echo] Generating R.java / Manifest.java from the resources...
        -aidl:
        [echo] Compiling aidl files into Java classes...
        compile:
        [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes
        -dex:
        [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex...
        -package-resources:
        [echo] Packaging resources
        [aaptexec] Creating full resource package...
        -package-debug-sign:
        [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key...
        [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore
        debug:
        [echo] Running zip align on final apk...
        [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk
        install:
        [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device...
        [exec] 1396 KB/s (288354 bytes in 0.201s)
        [exec] pkg: /data/local/tmp/MyApp-debug.apk
        [exec] Success
        -compile-tested-if-test:
        -dirs:
        [echo] Creating output directories if needed...
        -resource-src:
        [echo] Generating R.java / Manifest.java from the resources...
        -aidl:
        [echo] Compiling aidl files into Java classes...
        compile:
        [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes
        [javac] Note: /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java uses unchecked or unsafe operations.
        [javac] Note: Recompile with -Xlint:unchecked for details.
        -dex:
        [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyAppTests/bin/classes.dex...
        -package-resources:
        [echo] Packaging resources
        [aaptexec] Creating full resource package...
        -package-debug-sign:
        [apkbuilder] Creating MyAppTests-debug-unaligned.apk and signing it with a debug key...
        [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore
        debug:
        [echo] Running zip align on final apk...
        [echo] Debug Package: /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk
        install:
        [echo] Installing /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk onto default emulator or device...
        [exec] 1227 KB/s (94595 bytes in 0.075s)
        [exec] pkg: /data/local/tmp/MyAppTests-debug.apk
        [exec] Success
        run-tests:
        [echo] Running tests ...
        [exec] android.test.suitebuilder.TestSuiteBuilder$FailedToCreateTests:INSTRUMENTATION_RESULT: shortMsg=Class ref in pre-verified class resolved to unexpected implementation
        [exec] INSTRUMENTATION_RESULT: longMsg=java.lang.IllegalAccessError: Class ref in pre-verified class resolved to unexpected implementation
        [exec] INSTRUMENTATION_CODE: 0

        BUILD SUCCESSFUL
        Total time: 38 seconds

    Any idea what's causing the "Class ref in pre-verified class resolved to unexpected implementation" error?
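
    The duplicated jars are the likely culprit for the pre-verification error: after the copy, the same roboguice and gson classes end up in both classes.dex files, the instrumentation runs both APKs in one process, and Dalvik refuses classes that were pre-verified against one copy but resolve to the other. The usual remedy is to keep shared libraries on the test project's compile classpath but package them into only one of the two APKs. As a sketch only - the path element id and directory layout here are assumptions for this project, and the stock android_rules.xml of that SDK era dexes everything under libs/, so the jars must not sit in MyAppTests/libs - the test project's build.xml could point javac at the tested app's copies instead:

        <!-- MyAppTests/build.xml (hypothetical fragment): compile against the
             tested app's jars without packaging them into the test APK -->
        <path id="tested.project.libs">
            <fileset dir="../MyApp/libs" includes="*.jar"/>
        </path>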

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can't figure out what's wrong. The problem could be bad memory or other faulty hardware, and thankfully the Ubuntu Live CD has some tools to help you figure it out.

    Test your RAM with memtest86+

    RAM problems are difficult to diagnose - they can range from annoying program crashes to crippling reboot loops. Even if you're not having problems, when you install new RAM it's a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that - test your computer's RAM! Unlike many of the Live CD tools that we've looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: if you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory).

    Boot up your computer with an Ubuntu Live CD or USB drive. At the boot menu that greets you, use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing "c" and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time.

    Test your CPU with cpuburn

    Random shutdowns - especially when doing computationally intensive tasks - can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer - it will drive the CPU hard and cause it to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but it should be used with caution.

    Boot up your computer with an Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled "Community-maintained Open Source software (universe)" and click Close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter "cpuburn" in the Quick search text box. Click the checkbox in the left column, select Mark for Installation, and click the Apply button near the top of the window.

    As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it's probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selecting Accessories > Terminal. Cpuburn includes a number of tools to test different types of CPUs. If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility "top". If your computer is having a CPU, power supply, or cooling problem, then it is likely to shut down within ten or fifteen minutes. Because of the strain this program puts on your computer, we don't recommend leaving it running overnight - if there's a problem, it should crop up relatively quickly. Cpuburn's tools, including burnP6, have no interface; once they start running, they will keep driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program.

    Conclusion

    The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they're extremely useful and easy enough that anyone can use them.
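
    One detail worth knowing: a single burnP6 process only loads one core. A simple pattern for loading every core - sketched here for a dual-core Intel machine; swap in burnK7 for AMD - is to launch one instance per core and watch the load in top:

        burnP6 &        # one stress process per core
        burnP6 &
        top             # confirm ~100% total CPU usage; watch for shutdown
        killall burnP6  # stop all instances when you're done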

    Read the article

  • What is a "Public DNS registered IP address"?

    - by Emma
    I've been reading this ICANN agreement on new TLDs, and it has this section on DNS service availability, which I don't completely understand: "Refers to the ability of the group of listed-as-authoritative name servers of a particular domain name (e.g., a TLD), to answer DNS queries from DNS probes. For the service to be considered available at a particular moment, at least, two of the delegated name servers registered in the DNS must have successful results from “DNS tests” to each of their public-DNS registered “IP addresses” to which the name server resolves. If 51% or more of the DNS testing probes see the service as unavailable during a given time, the DNS service will be considered unavailable." I'm not 100% sure of the part in bold: does "public" refer to the DNS or to the IP addresses? It looks like there's a mistake and that the hyphen should have been after "DNS". So basically, does it mean "the public IP addresses registered in the DNS"?
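
    That reading - "the IP addresses registered in the public DNS", i.e. the A/AAAA records of the delegated name servers - matches what a monitoring probe would actually test. You can trace the chain yourself with dig (a sketch; example.com and the documentation address 192.0.2.1 stand in for a real TLD and a real server IP):

        dig +short NS example.com                    # the listed-as-authoritative name servers
        dig +short A ns1.example.com                 # a "public-DNS registered" IP address of one of them
        dig @192.0.2.1 example.com SOA +norecurse    # a "DNS test" sent directly to that IP

    Availability then means: at least two of the delegated servers answer such direct queries on every registered address.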

    Read the article

  • Unit testing newbie team needs to unit test

    - by Walter
    I'm working with a new team that has historically not done ANY unit testing. My goal is for the team to eventually employ TDD (Test Driven Development) as their natural process. But since TDD is such a radical mind shift for a non-unit-testing team, I thought I would just start off with writing unit tests after coding. Has anyone been in a similar situation? What's an effective way to get a team to be comfortable with TDD when they've not done any unit testing? Does it make sense to do this in a couple of steps? Or should we dive right in and face all the growing pains at once? EDIT: Just for clarification, there is no one on the team (other than myself) who has ANY unit testing exposure/experience. And we are planning on using the unit testing functionality built into Visual Studio.
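
    Since the team will start with the tooling built into Visual Studio, the on-ramp can be as small as one MSTest class - a minimal sketch, where PriceCalculator is a hypothetical class standing in for whatever the team writes first:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class PriceCalculatorTests
        {
            [TestMethod]
            public void ApplyDiscount_TenPercentOffOneHundred_ReturnsNinety()
            {
                // Arrange: the (hypothetical) class under test
                var calculator = new PriceCalculator();

                // Act
                decimal discounted = calculator.ApplyDiscount(100m, 0.10m);

                // Assert
                Assert.AreEqual(90m, discounted);
            }
        }

    Writing a handful of these after the code exists gets the team fluent with the test runner and the Arrange/Act/Assert rhythm before anyone is asked to flip to test-first.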

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks' time, I'll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I've been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I've attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; and from the feedback we got in the UX tests, I was able to see that what I was doing was an important part of the product. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I'd be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I'd have.

    What I've done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides - either in black or grey, depending on whether they were the same or not. However, you couldn't see exactly where the scripts were different, which was especially annoying when the two scripts were large - as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same - including case differences in keywords! The actual difference is highlighted in grey, which makes it easy to spot. The difference identification has been improved as well, so two scripts aren't identified as different if there's just a difference in meaningless whitespace characters, or when you have "select" on one side and "SELECT" on the other. We also have syntax highlighting, which makes it easier to read the scripts.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammars in. We also liked the idea of separating the grammar building from parsing a text. The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn't support the newest features of GOLD Parser, it has "the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code." That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project. Unfortunately, there wasn't an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn't that easy to find and implement. We're aware that we don't yet fully support all valid MDX, so sometimes you'll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don't yet recognise. If you come across something SSAS Compare doesn't recognise, we'd love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen. Doing all this has led me to many new technologies and projects I haven't worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What's coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I'll see you next summer!
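
    To make the formatting work concrete: the two fragments below differ in keyword case, whitespace, and layout, yet after the automatic formatting step the difference engine treats them as identical (illustrative MDX against the sample Adventure Works cube):

        select {[Measures].[Internet Sales Amount]} on columns from [Adventure Works]

        SELECT { [Measures].[Internet Sales Amount] } ON COLUMNS
        FROM [Adventure Works]

    Only a genuine semantic difference - a changed member, set, or clause - is now highlighted.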

    Read the article

  • Ten Things I Wish I’d Known When I Started Using tSQLt and SQL Test

    The open-source unit test framework tSQLt is a great way of writing unit tests in the same language as the one being tested. In retrospect, after using tSQLt for a while, what are the 'gotchas': those things that you'd have been better off knowing about before you get started? David Green lists a few tips he wished he'd read beforehand.

    Learn Agile Database Development Best Practices: agile database development experts Sebastian Meine and Dennis Lloyd are running day-long classes designed to complement Red Gate's SQL in the City US tour. Classes will be held in San Francisco, Chicago, Boston and Seattle. Register Now.
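
    For readers who haven't seen tSQLt at all, a test really is just T-SQL. A minimal sketch - dbo.OrderLines and dbo.TotalPrice are hypothetical objects under test, and tSQLt must already be installed in the database:

        EXEC tSQLt.NewTestClass 'OrderTests';
        GO
        CREATE PROCEDURE OrderTests.[test TotalPrice sums the order lines]
        AS
        BEGIN
            -- FakeTable swaps in an empty copy, isolating the test from real data
            EXEC tSQLt.FakeTable 'dbo.OrderLines';
            INSERT INTO dbo.OrderLines (OrderId, Price) VALUES (1, 10), (1, 15);

            DECLARE @actual money = dbo.TotalPrice(1);

            EXEC tSQLt.AssertEquals @Expected = 25, @Actual = @actual;
        END;
        GO
        EXEC tSQLt.Run 'OrderTests';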

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes software valuable is required to become a trait of the software. We as software developers thus need to understand, and then find a way to implement, requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she's certain to want something - but then is not happy when that's delivered. Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that.

    Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and you decide quite a few things following your gut feeling, or by relying on "established practices". That's OK in general and most of the time - but still... I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs, and with them it's easier to check whether a decision falls in their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think: any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short, I call those aspects:

    Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime, or to find auctions to bid for.

    Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users.

    Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) surely is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer), you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years, so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity, or Afferent Coupling. They may give a hint... but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability... that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • Sam Abraham To Speak about MVC & MVVM at InterClick on May 19th

    - by Sam Abraham
    My next speaking engagement will be taking place at InterClick in Boca Raton, FL on Wednesday, May 19th, 2010. Here is a quick abstract of what I will be blabbing about: MVC and MVVM are two of many buzzwords under the Architecture Spotlight as a means of achieving true separation of concerns between the data, business logic, and UI layers. In this session, we will discuss the basic concepts of Microsoft MVC and demonstrate the ease of creating a new MVC project and related unit tests within VS2010. We will then introduce MVVM as a design paradigm and incorporate it into an MS MVC application structure. Next, we will take a look at MVVM in the context of a sample Silverlight application. Throughout our talk we will be demonstrating various features of the latest and greatest VS2010. You can get more information about the event and the speaker, as well as register to attend, at this link: http://sherstaff.com/EventSignUp.aspx?EventID=777 Look forward to seeing you all there.
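
    For readers new to the pattern, the heart of MVVM is a view model the UI can data-bind to. A minimal, era-appropriate sketch (CustomerViewModel is purely illustrative):

        using System.ComponentModel;

        public class CustomerViewModel : INotifyPropertyChanged
        {
            private string _name;

            // The view binds to this property; setting it notifies the UI.
            public string Name
            {
                get { return _name; }
                set { _name = value; OnPropertyChanged("Name"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    The view knows only this class, the view model knows only the model, and that is the separation of concerns the talk is about.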

    Read the article

  • Cannot update grub with parameters on live USB

    - by Nanne
    I have booted from a live USB ("Try Ubuntu") that also has a persistence option set (I used LiLi to create it), to do some tests for this pcie hotplug issue I'm having. I'm trying to test some boot parameters (like in this question) by doing this:

        sudo nano /etc/default/grub
        sudo update-grub

    The problem is that the last command gives me this:

        /usr/sbin/grub-probe: error: failed to get canonical path of /cow.

    It looks like /cow is the filesystem that is mounted on /, according to:

        :~# df
        Filesystem     1K-blocks    Used  Available Use% Mounted on
        /cow             4056896 2840204    1007284  74% /
        udev             1525912       4    1525908   1% /dev
        tmpfs             613768     844     612924   1% /run
        ....

    Is there a way for me to run update-grub?
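
    Note that update-grub has nothing useful to regenerate here: the live session is not booted by a grub.cfg inside /cow. A hedged workaround for testing kernel parameters - the path below is an assumption for a LiLi-built stick, which typically boots via syslinux - is to edit the boot entry on the USB itself, or simply edit the boot line at the live menu for a one-off test:

        # from another machine (or the live session), edit the stick's boot config
        sudo nano /media/<usb-label>/syslinux/txt.cfg
        # append the parameter under test (e.g. pci=noaer) to the
        # 'append ... boot=casper ...' line, save, and reboot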

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The report validates much of what we’ve heard from our largest customers - the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different.

    B2B and B2C Commerce are alike…

    Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: search & navigation, promotions, cross-channel commerce and mobile: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.”

    Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double- and triple-digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke the separate B2B Commerce Wave out from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion).

    But a little different…

    Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to the rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.

    Forrester also emphasizes three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: (i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; (ii) web content management (WCM), needed to manage large volumes of unstructured marketing information; and (iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. We would like to expand on each of these three areas.

    As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform, without adding a largely redundant, less functional PIM in the commerce front-end.

    On the topic of web content management, we were pleased to see Forrester recognize Oracle’s unique functional capabilities in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formally FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online.

    Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system - if not several of them - and are not interested in yet another one.

    So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution.” What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market - which we’ll address in an exciting new release in the coming months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions - from marketing, sales automation, and service to master data management and ERP.

    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce resources:

    - 2013 B2B Commerce Trends Report
    - B2B Commerce Whitepaper: Consumerization, Complexity, Change
    - B2B Commerce Webcast: What Industry Trend Setters Do Right
    - Internet Retailer, Web Drives Sales for B2B Companies
    - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
    - Internet Retailer, B2B e-Commerce is poised for growth

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • TestDriven.NET - Free unit test tool for Visual Studio

    - by Guilherme Cardoso
    Developers who do unit testing are familiar with ReSharper and its unit testing plugin. For those who, like me, don't have great PC hardware (Pentium 4, 3 GHz, 1 GB RAM), ReSharper can be really slow and hurt the machine's performance. That's why I use TestDriven.NET. TestDriven.NET is a tool with a freeware license (other licenses exist for this product) that lets us run unit tests through a plugin integrated into Visual Studio. You can check some screenshots here: http://www.testdriven.net/Screenshots.aspx It's compatible with: NUnit, MbUnit, MSTest, NCover, Reflector, TypeMock, dotTrace and MSBee. More information and free download here: http://www.testdriven.net

    Read the article

  • PHP TestFest 2010 - Time to Get Involved

    - by christopher.jones
    Following a great 2009, the PHP community is organizing a repeat TestFest for 2010. São Paulo, Brazil kicked off the season on May 29th, and their results are already up on the results page. The TestFest 2010 wiki page contains all the information about participating in TestFest 2010, including some nice little scripts for building PHP on various platforms. There is a loose structure to the TestFest: user groups coordinate local events, and of course individuals are welcome to contribute tests. The PHP QA mail list is a good place to ask questions (subscribe here).

    Read the article

  • Can my ikmnet test results say something about the career choice I should take?

    - by Nicke
    I took two tests via ikmnet and scored 70% on SQL and 65% on Java. While not bad, it can be improved. The subskills I need to improve, according to the test, are interfaces and inheritance, compilation and deployment, flow control, the java.lang package and "Java Program Construction" - and these topics seem rather broad to me. Rather than just learning by programming, would you advise me to take a certification, follow a course, or otherwise improve my skills? By the way, I enjoy Python more than Java, so should I market myself more as a Python programmer, or even aim for a role that some companies advertise as systems analyst - more of a system developer with more technical writing, evaluating systems in cooperation with management rather than programming? Thank you for any comment and/or answer.

    Read the article

  • How was your experience working as a game tester?

    - by MrDatabase
    I'm currently an independent game developer, and I'm open to the idea of working on a team in the game industry. I'm under the impression that being a "game tester" is a relatively easy way to get a job... however, that job may be somewhat undesirable. So how was your experience working as a tester in the game industry? Some interesting experiences could include:

    - Did the game tester position lead to other, more desirable positions?
    - How were the relationships between testers and developers?
    - Did you write any code? (test "frameworks", unit tests, etc.)
    - If bugs made it into production, was any (potentially unfair) blame put on the testers?

    Read the article

  • Survey: how do you unit test your T-SQL?

    - by Alexander Kuznetsov
    How do you unit test your T-SQL? Which libraries/tools do you use? What percentage of your code is covered by unit tests, and how do you measure it? Do you think the time and effort you invested in your unit testing harness has paid off or not? ...(read more)

    Read the article

  • ASP.NET MVC 3 RC2 available: even faster, it is compatible with the Visual Studio 2010 SP1 beta

    ASP.NET MVC 3 RC2 available. Even faster, it is compatible with the Visual Studio 2010 SP1 beta. Update of 13/12/10. Microsoft, through a post by Scott Guthrie, vice president of its developer division, has just announced the release of ASP.NET MVC 3 Release Candidate 2. On the menu for this new version: several bug fixes and performance optimizations. According to Guthrie, performance tests on this version show that ASP.NET MVC 3 is clearly faster than version 2, and that existing ASP.NET MVC applications, after an upgrade to...

    Read the article

  • How to use Xvfb in Ubuntu 14.04 with/without RandR?

    - by Itchy
    I am trying to run unit tests with Selenium driving Firefox on my Ubuntu 14.04 server, using Xvfb as described in this blog to provide a virtual display for Firefox to render into. But Xvfb somehow doesn't load/work with RandR, because whenever I try this: sudo Xvfb :10 -ac & export DISPLAY=:10 firefox I get an Xlib: extension "RandR" missing on display ":10" error. I've also tried sudo Xvfb :10 -ac +extension RANDR, sudo Xvfb :10 -ac -extension RANDR and, because it ships xrandr, apt-get install x11-xserver-utils. My setup is a plain Ubuntu 14.04 server with apt-get install xvfb firefox. Can anyone please help me run Xvfb with or without RandR?
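
    One way to script this whole arrangement from the test harness itself is sketched below in Python, assuming the selenium package and the Xvfb binary are installed (display number :10 is just an example; this does not by itself resolve the RandR warning, which is often harmless for Selenium runs):

      # Start Xvfb ourselves, point DISPLAY at it, then drive Firefox via Selenium.
      import os
      import subprocess
      import time

      from selenium import webdriver

      # Virtual framebuffer on display :10 with a 24-bit 1024x768 screen.
      xvfb = subprocess.Popen(["Xvfb", ":10", "-ac", "-screen", "0", "1024x768x24"])
      time.sleep(1)  # give the X server a moment to come up
      os.environ["DISPLAY"] = ":10"

      try:
          driver = webdriver.Firefox()  # Firefox renders into the virtual display
          driver.get("http://example.com")
          print(driver.title)
          driver.quit()
      finally:
          xvfb.terminate()  # always tear the X server down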

    Read the article

  • VS 2010 SP1 BETA – App.config XML Transformation Fix

    - by João Angelo
    The current version of the App.config XML transformations, as described in a previous post, does not support the SP1 BETA version of Visual Studio. I did some quick tests and decided to provide a different version with a compatibility fix for those already experimenting with the beta release of the service pack. This is a quick workaround for the build errors I found when using the transformations in SP1 beta, and it is pretty much untested, since I'll wait for the final release of the service pack to take a closer look at any possible problems. But for now, those who have already installed SP1 beta can use the following transformations: VS 2010 SP1 BETA – App.config XML Transformation. And those with the RTM release of Visual Studio can continue to use the original version of the transformations, available from: VS 2010 RTM – App.config XML Transformation

    Read the article

  • NTFS Issues in Windows 7 and 2008 R2 - 'Is it a Bug?'

    - by renewieldraaijer
    I have been using the various versions of the Microsoft Windows product line since NT4 and I really thought I knew the ins and outs of the NTFS filesystem by now. There were always a few rules of thumb for understanding what happens when you move data around. These rules were: "If you copy data, the copied data inherits the permissions of the location it is copied to. The same goes for moving data between disk partitions. Only when you move data within the same partition are the permissions kept." Recently I was asked to assist in troubleshooting some NTFS related issues. This forced me to have another good look at this theory. To my surprise I found out that this theory no longer completely stands. Apparently some things have changed since the release of Windows Vista / Windows 2008. Since the release of these operating systems, a move within the same disk partition results in the data inheriting the permissions of the location it is moved into. A major change in the NTFS filesystem, you would think! Not quite! The above only applies when the move operation is performed using Windows Explorer. A move using the 'move' command from within a cmd prompt, for example, retains the NTFS permissions, just like before in Windows XP and older systems. Conclusion: Windows Explorer is responsible for changing the ACLs of the moved data. This is a remarkable change, but if you follow this theory, the resulting ACL after a move operation is still predictable. We could say that since Windows Vista and Windows 2008, a new rule set applies: "If you copy data, the copied data inherits the permissions of the location it is copied to. The same goes for moving data between disk partitions and within disk partitions. Only when you move data within the same partition using something other than Windows Explorer are the permissions kept." The above behavior should be unchanged in Windows 7 / Windows 2008 R2 compared to Windows Vista / 2008. But somehow the NTFS permissions are not so predictable in Windows 7 and Windows 2008 R2. Moving data within the same disk partition sometimes results in the permissions being kept, and sometimes in permissions inherited from the destination location. I will try to demonstrate this in a few examples. Example 1 (incorrect behavior): Consider two folders, 'Folder A' and 'Folder B', with different permissions configured. Now we create the test file 'test file 1.txt' in 'Folder A' and afterwards move this file to 'Folder B' using Windows Explorer. According to the new theory, the file should inherit the permissions of 'Folder B' and therefore 'Group B' should appear in the ACL of 'test file 1.txt'. Instead, the permissions from the originating location are kept, while the permissions of 'Folder B' should have been inherited. Example 2 (correct behavior): Again, consider the same two folders. This time we make a small modification to the ACL of 'Folder A': we add 'Group C' to the ACL, and again we create a file in 'Folder A', which we name 'test file 2.txt'. Next, we move 'test file 2.txt' to 'Folder B' and check its permissions at the target location. We can now see that the permissions are inherited. This is what should happen, and can be considered correct behavior for Windows Vista / 2008 / 7 / 2008 R2.
    It remains uncertain why this behavior is so inconsistent. At this time it is under investigation with Microsoft Support. The investigation has been going on for the last two weeks, and it is beginning to look like there is no rational reason for this other than a bug in Windows Explorer in Windows 7 and 2008 R2. As soon as there is any certainty on this, I will note it here in this blog. The examples above are harmless tests performed on my own laptop. If you create the same set of folders and groups and configure exactly the same permissions, you will see exactly the same behavior. Be sure to use Windows 7 or Windows 2008 R2. Initially the problem arose at a customer site, where move operations on fileserver data by users produced unpredictable results. This resulted in the wrong set of people having access permissions on data they should not have had permissions to. Of course this is something we want to prevent at all costs. I have also done several tests with move operations using the move command in a cmd prompt. That way the behavior is always consistent. The inconsistent behavior is only exposed when using Windows Explorer to initiate the move operation, and only on Windows 7 or Windows 2008 R2 systems. It is evident that this behavior changes when the ACL of a folder has been changed, for example by adding an extra entry. The reason for this remains uncertain, though. To be continued... A Dutch version of this post can be found at: http://blogs.platani.nl/?p=612
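
    For anyone who wants to script this kind of check rather than click through Explorer, the ACL comparison can be automated. Here is a minimal sketch in Python, assuming Windows, the stock icacls tool, and hypothetical paths; note that a scripted move within one volume behaves like the cmd 'move' (permissions retained), not like Explorer:

      # Capture a file's NTFS ACL before and after a move and print both.
      import shutil
      import subprocess

      def read_acl(path):
          """Return the raw icacls output for a path."""
          return subprocess.run(
              ["icacls", path], capture_output=True, text=True, check=True
          ).stdout

      # Hypothetical test locations on the same partition.
      src = r"C:\Temp\FolderA\test file.txt"
      dst = r"C:\Temp\FolderB\test file.txt"

      acl_before = read_acl(src)
      shutil.move(src, dst)  # within a volume this is a rename, like cmd's move
      acl_after = read_acl(dst)

      print("ACL before move:\n", acl_before)
      print("ACL after move:\n", acl_after)  # compare retained vs. inherited entries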

    Read the article

  • European Interoperability Framework - a new beginning?

    - by trond-arne.undheim
    The most controversial document in the history of the European Commission's IT policy is out. EIF is here, wrapped in the Communication "Towards interoperability for European public services", and including the new feature European Interoperability Strategy (EIS), arguably a higher strategic take on the same topic. Leaving EIS aside for a moment, the EIF controversy has been around IPR, defining open standards, and the proper terminology around standardization deliverables. Today, as the document finally emerges, what is the verdict? First of all, to be fair to those among you who do not spend your lives in the intricate labyrinths of Commission IT policy documents on interoperability, let's define what we are talking about. According to the Communication: "An interoperability framework is an agreed approach to interoperability for organisations that want to collaborate to provide joint delivery of public services. Within its scope of applicability, it specifies common elements such as vocabulary, concepts, principles, policies, guidelines, recommendations, standards, specifications and practices."

    The Good

    - EIF reconfirms that "The Digital Agenda can only take off if interoperability based on standards and open platforms is ensured" and also confirms that "The positive effect of open specifications is also demonstrated by the Internet ecosystem."
    - EIF takes a productive and pragmatic stance on openness: "In the context of the EIF, openness is the willingness of persons, organisations or other members of a community of interest to share knowledge and stimulate debate within that community, the ultimate goal being to advance knowledge and the use of this knowledge to solve problems" (p. 11). "If the openness principle is applied in full: - All stakeholders have the same possibility of contributing to the development of the specification and public review is part of the decision-making process; - The specification is available for everybody to study; - Intellectual property rights related to the specification are licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software" (p. 26).
    - EIF is a formal Commission document. The former EIF 1.0 was a semi-formal deliverable from the PEGSCO, a working group of Member State representatives.
    - EIF tackles interoperability head-on and takes a clear stance: "Recommendation 22. When establishing European public services, public administrations should prefer open specifications, taking due account of the coverage of functional needs, maturity and market support."
    - The Commission will continue to support the National Interoperability Framework Observatory (NIFO), reconfirming the importance of coordinating such approaches across borders.
    - The Commission will align its internal interoperability strategy with the EIS through the eCommission initiative.
    - One cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software, and the EIF seems to have picked up on this fact. What does the EIF say about the relation between open specifications and open source software? The EIF introduces, as one of the characteristics of an open specification, the requirement that IPRs related to the specification have to be licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software. In this way, companies working under various business models can compete on an equal footing when providing solutions to public administrations, while administrations that implement the standard in their own software (software that they own) can share such software with others under an open source licence if they so decide.
    - EIF is now among the centerpieces of the Digital Agenda (even though this demands extensive inter-agency coordination in the Commission): "The EIS and the EIF will be maintained under the ISA Programme and kept in line with the results of other relevant Digital Agenda actions on interoperability and standards such as the ones on the reform of rules on implementation of ICT standards in Europe to allow use of certain ICT fora and consortia standards, on issuing guidelines on essential intellectual property rights and licensing conditions in standard-setting, including for ex-ante disclosure, and on providing guidance on the link between ICT standardisation and public procurement to help public authorities to use standards to promote efficiency and reduce lock-in." (Communication, p. 7)

    All in all, quite a few good things have happened to the document in the two years it has been on the shelf or was being re-written, depending on your perspective, in any case awaiting the storms to calm.

    The Bad

    - While a certain pragmatism is required, and governments cannot migrate to full openness overnight, EIF gives a bit too much room for governments not to apply the openness principle in full. Plenty of reasons are given, which should maybe have been put as challenges to be overcome: "However, public administrations may decide to use less open specifications, if open specifications do not exist or do not meet functional interoperability needs. In all cases, specifications should be mature and sufficiently supported by the market, except if used in the context of creating innovative solutions."
    - EIF does not use the internationally established terminology: open standards. Rather, the EIF introduces the notion of "formalised specification". How do "formalised specifications" relate to "standards"? According to the FAQ provided: The word "standard" has a specific meaning in Europe as defined by Directive 98/34/EC. Only technical specifications approved by a recognised standardisation body can be called a standard. Many ICT systems rely on the use of specifications developed by other organisations such as a forum or consortium. The EIF introduces the notion of "formalised specification", which is either a standard pursuant to Directive 98/34/EC or a specification established by ICT fora and consortia. The term "open specification" used in the EIF, on the one hand, avoids terminological confusion with the Directive and, on the other, states the main features that comply with the basic principle of openness laid down in the EIF for European Public Services. Well, this may be somewhat true, but in reality Europe is 30 years behind in terminology. Unless the European Standardization Reform gets completed in the next few months, most Member States will likely conclude that they will go on referencing and using standards beyond those created by the three European endorsed monopolists of standardization, CEN, CENELEC and ETSI. Who can afford to follow the strict Brussels rules for what may be called an open standard when, in reality, standards stemming from global standardization organizations, the so-called fora/consortia, dominate the IT industry? What exactly is the EIF saying? Does it encourage Member States to go on using non-ESO standards as long as they call them something else? I guess I am all for it, although it is a bit cumbersome, no?

    Why was there so much interest around the EIF? The FAQ attempts to explain: Some Member States have begun to adopt policies to achieve interoperability for their public services. These actions have had a significant impact on the ecosystem built around the provision of such services, e.g. providers of ICT goods and services, standardisation bodies, industry fora and consortia, etc. The Commission identified a clear need for action at European level to ensure that actions by individual Member States would not create new electronic barriers that would hinder the development of interoperable European public services. As a result, all stakeholders involved in the delivery of electronic public services in Europe have expressed their opinions on how to increase interoperability for public services provided by the different public administrations in Europe. Well, it does not take two years to read 50 consultation documents, and the EU Standardization Reform is not yet completed, so, more pragmatically, you finally had to release the document. OK, let's leave some of that aside, because the document is out and some people are happy (and others definitely not).

    The Verdict

    Considering the controversy, the delays, the lobbying, and the interests at stake in the EU, in Member States and among vendors large and small, this document is pretty impressive. As with a good wine that has not yet come to full maturity, let's say it seems to be coming in somewhere in the 85-88/100 range, but only a more fine-grained analysis, enjoyment in good company, and ultimately implementation will tell. The European Commission has today adopted a significant interoperability initiative to encourage public administrations across the EU to maximise the social and economic potential of information and communication technologies. Today, we should rally around this achievement. Tomorrow, let's sit down and figure out what it means for the future.

    Read the article

  • UI Testing with Visual Studio 2010 Feature Pack 2

    - by Seth P.
    One of the most intriguing items in the recently released Visual Studio 2010 Feature Pack 2 is the ability to create and edit coded UI tests for Silverlight applications. Here is an example of a coded UI test. I haven't had much time to use it yet, but for those who have, I am curious what your thoughts are. Is this something you have found to be particularly useful? I would like to automate a significant amount of regression testing that we currently perform manually. In your experience, has it made a major impact on the resources you would normally have to dedicate to testing? Thanks, Seth

    Read the article

  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
    We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently, this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering getting her set up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VS Express will work properly with what we do. So: Does VWD 2010 Express support automated tests? If no to the above, what happens when it opens a solution file that includes a test project, modifies it, and saves it? Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?

    Read the article

  • IBM's cloud and hosting services are now SAP certified, for greater stability and security

    IBM's cloud and hosting services are now SAP certified, for greater stability and security. "Clients looking to deploy SAP applications in the cloud can rely on IBM to help them manage and maintain the requirements of their solutions in a safe and secure cloud environment that enables flexible services and reduced operating costs," proudly declared Jim Comfort, vice-president of IBM's Offering Management division. Indeed, after long and very thorough tests, the IT giant's infrastructure, processes and technical teams have been granted the much-desired SAP certification. These are the cloud computing facilities and...

    Read the article

  • Don't Use Static? [closed]

    - by Joshiatto
    Possible Duplicate: Is static universally "evil" for unit testing and if so why does ReSharper recommend it? Heavy use of static methods in a Java EE web application? I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said: "Don't use 'static'. You can't write automated tests with static classes and methods. 'Static' is to be avoided." I checked, and fully a quarter of my classes are marked "static". I use static when I am not going to create an instance of a class, because the class is a single global class used throughout the code. He went on to mention mocking and IoC/DI techniques that can't be used with static code, and said it is unfortunate when third-party libraries are static because of their untestability. Is this other architect correct?
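
    To make the architect's point concrete, here is a minimal sketch in Python (not from the original question, which concerns static classes in general): a collaborator reached through a hard-wired static call can only be faked by patching the shared class, while an injected collaborator gives the test a natural seam:

      # Static/hard-wired dependency vs. injected dependency, and how each tests.
      from unittest import mock

      class Clock:
          @staticmethod
          def now():
              return "2010-12-13"  # stands in for a real time lookup

      def greeting_hardwired():
          # Reaches straight for the static method: no seam for tests.
          return "Today is " + Clock.now()

      def greeting_injected(clock=Clock):
          # The collaborator is a parameter, so a test can pass any stand-in.
          return "Today is " + clock.now()

      # The injected version needs no patching machinery at all:
      fake = mock.Mock()
      fake.now.return_value = "1999-12-31"
      assert greeting_injected(fake) == "Today is 1999-12-31"

      # The hard-wired version forces monkey-patching of the shared class:
      with mock.patch.object(Clock, "now", return_value="1999-12-31"):
          assert greeting_hardwired() == "Today is 1999-12-31"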

    Read the article
