Search Results

Search found 11953 results on 479 pages for 'functional testing'.

  • Lots of goodies

    - by wcoekaer
    We just issued a press release with a number of very good updates for everyone. There are a few things of importance: 1) As of right now, Oracle Linux 6 with the Unbreakable Kernel is certified with a number of Oracle products such as Oracle Database 11gR2 and Oracle Fusion Middleware. The certification pages in the Oracle Support portal will be updated with the latest certification status for the various products. As always, we have gone through a long period of very comprehensive testing and validation to ensure that the whole stack works really well together, with very large database workloads, middleware application workloads, etc. 2) Standard certification efforts for Oracle Linux 6 with the Red Hat Compatible Kernel are in progress and we expect them to be completed in the next few months. Because of the compatibility between OL6 and RHEL6 we can then also state certification for RHEL6. 3) Oracle Linux binaries (and of course source code) have been free for download -and- use (including production, not just trial periods) since day one. You can freely redistribute the binaries, unlike many other Linux vendors where you need to pay a support subscription to even get access to the binaries. We offered both the base distribution release DVDs (OL4, OL5, OL6) and the update releases, such as 5.1, 5.2, etc. this way. Today, in this announcement, we also started to make available the bugfix and security updates released in between these update releases. So the errata streams (both binary and source code) for OL4, 5 and 6 are now free for download and use from http://public-yum.oracle.com. This includes UEK and UEK2. The nice thing is, if you want a complete, up-to-date system without support, use this; if you then need support, get a support subscription. Simple, convenient, effective. We have great SLAs in producing our update streams, consistency in release timing, and testing of all the components. Have at it!

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I released an updated version of QueryUnit, version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is better at handling string results. I must say that I cannot live anymore without proper unit testing of a BI solution. Just yesterday, one of the unit tests at a customer site failed, exposing a subtle situation in which the release of a new version of a custom application would have corrupted the source of the BI data, with very little chance that anyone would have noticed it for several days. This can happen when you have more than 15 systems handling the data needed by your BI solution. The key message of this situation is “Trust No One”: if your data hasn’t passed quality testing, it’s not trustworthy. Period. QueryUnit is now officially a hero :) No superpowers yet, but useful above all. http://queryunit.codeplex.com/
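
    For readers who have not used query-based testing before, here is a rough sketch of the idea behind it, written as plain JUnit 4 over JDBC. This is not QueryUnit's own syntax (QueryUnit drives its queries and assertions from configuration); the connection string, schema and table names below are hypothetical placeholders.

        // Sketch only: the "run a query, assert on the result" idea behind query-based
        // testing of BI data. Connection string and table names are hypothetical.
        import static org.junit.Assert.assertEquals;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        import org.junit.Test;

        public class SalesFactDataQualityTest {

            // Hypothetical connection string for the BI staging database.
            private static final String URL =
                    "jdbc:sqlserver://localhost;databaseName=StagingDW;integratedSecurity=true";

            // Helper: run a query that returns a single numeric value.
            private long scalar(String sql) throws Exception {
                try (Connection con = DriverManager.getConnection(URL);
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery(sql)) {
                    rs.next();
                    return rs.getLong(1);
                }
            }

            @Test
            public void factAndSourceRowCountsMustMatch() throws Exception {
                long source = scalar("SELECT COUNT(*) FROM src.Orders");
                long fact = scalar("SELECT COUNT(*) FROM dw.FactOrders");
                assertEquals("Row count drifted between source and fact table", source, fact);
            }

            @Test
            public void noFactRowMayHaveNegativeAmount() throws Exception {
                long bad = scalar("SELECT COUNT(*) FROM dw.FactOrders WHERE Amount < 0");
                assertEquals("Negative amounts found in FactOrders", 0, bad);
            }
        }

    A suite of checks like this, run on every release of the upstream applications, is exactly the kind of tripwire that caught the corruption described above.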

    Read the article

  • What makes you look like a bad developer (i.e. a hacker) [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to take a look at myself, and I wonder what makes one look like a bad developer (i.e. a hacker). These are a few things about me:
    - I use the IDE intensively, all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I am not used to (I can't have fluency in it, it is new), I only have very rough high-level ideas, and I don't use the standard modeling diagrams for early detailed planning.
    - I use unorthodox diagrams that I invented for when I need to draw the design in detail. I don't use UML or similar; I don't find them sufficient. I divide the diagrams I draw into three types: very high-level diagrams that can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, plus trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises, though I am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hacky way just so that the module works in the general cases and I can play around with it. The module is then refactored intensively, so that I can see the corner cases I couldn't (or wouldn't want to) anticipate before writing code.
    - I use JUnit for integration-like tests, using the TestSuite class and ordering the unit test classes in the suite (see the sketch below).
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do something with a library that I haven't used much.
    So, your judgment: am I a bad developer? A hacker? To put it in other words, and to make sure this is not considered off-topic: Is it bad practice to make your code too agile during the incubating/prototyping phase of software development? Is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but those are for specific products, not general use.)
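
    A minimal sketch of the suite-ordering idea from the JUnit point above. The question mentions JUnit 3's TestSuite class; the same ordering can be expressed with JUnit 4's Suite runner, which executes the listed classes in the order given. All class and test names here are hypothetical placeholders.

        import static org.junit.Assert.assertTrue;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // Hypothetical integration-style flow: JUnit runs the suite members in the
        // order they are listed, so coarse steps can be chained end to end.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({
                CheckoutFlowSuite.CreateAccountTest.class,
                CheckoutFlowSuite.AddItemsToCartTest.class,
                CheckoutFlowSuite.PlaceOrderTest.class
        })
        public class CheckoutFlowSuite {

            public static class CreateAccountTest {
                @Test
                public void accountIsCreated() {
                    // call the real system here; the trivial assertion keeps the sketch runnable
                    assertTrue(true);
                }
            }

            public static class AddItemsToCartTest {
                @Test
                public void itemsAreAdded() {
                    assertTrue(true);
                }
            }

            public static class PlaceOrderTest {
                @Test
                public void orderIsPlaced() {
                    assertTrue(true);
                }
            }
        }

    The sketch only shows the mechanics; whether this is good practice is the question being asked.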

    Read the article

  • July Seminars & Events (OTN Japan)

    - by OTN-J Master
    July seminar and event information from the OTN Japan team. Oracle OpenWorld and JavaOne will be held September 22-26 in San Francisco. Events scheduled in Japan for July include: an iOS business application development seminar covering iPhone and iPad (July 5, 13:30-17:30); a community conference in Okinawa whose Java track covers Java EE 7, Tomcat, and GlassFish Server Open Source Edition (July 6, 10:00-17:00); a data integration seminar featuring Oracle GoldenGate (July 9, 13:00-17:00); session #106 of the evening seminar series, covering Export/Import and SQL*Loader (July 10, 18:30-20:00); an Oracle Database Appliance overview (July 16, 15:30-17:15); an ORACLE MASTER Bronze Oracle Database 11g certification preparation seminar (July 17, 18:30-20:00); a hands-on seminar on Web application testing with Oracle Application Testing Suite (July 19, 10:00-17:30); and the Oracle Optimized Data Center Summit (July 25, 13:00-17:30, and July 26, 12:30-16:30).

    Read the article

  • Append/Add Child Div (jQuery) for Messaging System

    - by Lord Varlin
    So, I'm having difficulty trying to append/add a child div to the parent div, and thought someone out here may know the best way for me to go about this. I have attached my code (without PHP right now, just hard coding text and stuff). But here is what I am trying to do: When a message is posted, you hit the "Reply Button" and a new div will appear underneath containing the reply form. Right now, here are the issues I know about and can't get around: The DIV is a class, so when I use jQuery to try to target the DIV it targets everything since it's no unique. The Reply Button is also a class, so it's not unique. Here is a video of it in action: http://tinypic.com/r/2luxwnr/7 HTML <body> <div id="content-container"> <div id="message-viewer"> <div class="roar"> <div class="roaractionpanel"> <div id="msg-business2"></div> <div class="roartime"><p class="roartime-text">3:26PM</p></div> </div> <div class="roarcontent"> <button type="submit" class="btn-reply"></button> <h5>Test Post</h5><br> <h6>Lord Varlin</h6><br> <h2>Test post... let's see.</h2> </div> </div> <div class="newreply"> <div class="newreplycontent"> <h1>This is where the fields for a new reply will go.</h1> </div> </div> <div class="roar"> <div class="roaractionpanel"> <div id="msg-business2"></div> <div class="roartime"><p class="roartime-text">3:26PM</p></div> </div> <div class="roarcontent"> <button type="submit" class="btn-reply"></button> <h5>Testing another</h5><br> <h6>Lord Varlin</h6><br> <h2>Hmm dee dumm...</h2> </div> </div> <div class="roarreply"> <div class="roarreply-marker"> <p class="roarreplytime-text">June 26th @ 4:42AM</p> </div> <div class="roarreplycontent"> <h9>Testing a reply. Hmmmm.</h9><br> <h8>Lord Varlin</h8> </div> </div> <div class="newreply"> <div class="newreplycontent"> <h1>This is where the fields for a new reply will go.</h1> </div> </div> <div class="roar"> <div class="roaractionpanel"> <div id="msg-business2"></div> <div class="roartime"><p class="roartime- text">3:26PM</p></div> </div> <div class="roarcontent"> <button type="submit" class="btn-reply"></button> <h5>Testing another</h5><br> <h6>Lord Varlin</h6><br> <h2>jQuery, work with me please.</h2> </div> </div> <div class="roarreply"> <div class="roarreply-marker"> <p class="roarreplytime-text">June 26th @ 4:42AM</p> </div> <div class="roarreplycontent"> <h9>Testing a reply. Hmmmm.</h9><br> <h8>Lord Varlin</h8> </div> </div> <div class="roarreply"> <div class="roarreply-marker"> <p class="roarreplytime-text">June 26th @ 4:42AM</p> </div> <div class="roarreplycontent"> <h9>Testing a reply. Hmmmm.</h9><br> <h8>Lord Varlin</h8> </div> </div> <div class="newreply"> <div class="newreplycontent"> <h1>This is where the fields for a new reply will go.</h1> </div> </div> </div> </div> </body> JQUERY $(".btn-reply").click(function() { $(".newreply").toggleClass("show"); return false; }); So, I see the flaws, but I just can't wrap my head around how to pull this off! Any guidance would be awesome! :)

    Read the article

  • Getting java.lang.ClassNotFoundException: javax.servlet.ServletContext in junit

    - by coder
    I'm using spring mvc in my application and I'm writing junit test cases for a DAO. But when I run the test, I get an error java.lang.ClassNotFoundException: javax.servlet.ServletContext. In the stacktrace, I see that this error is caused during getApplicationContext. In my applicationContext, I havent defined any servlet. Servlet mapping is done only in web.xml so I dont understand why I'm getting this error. Here is my applicationContext.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd" xmlns:tx="http://www.springframework.org/schema/tx"> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="com.mysql.jdbc.Driver"/> <property name="jdbcUrl" value="jdbc:mysql://localhost:3306/testdb"/> <property name="user" value="username"/> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop> <prop key="hibernate.connection.driver_class">com.mysql.jdbc.Driver</prop> <prop key="hibernate.connection.url">jdbc:mysql://localhost:3306/myWorld_test</prop> <prop key="hibernate.connection.username">username</prop> </props> </property> <property name="packagesToScan"> <list> <value>com.myprojects.pojos</value> </list> </property> </bean> <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate"> <property name="sessionFactory" ref="sessionFactory"/> </bean> <tx:annotation-driven transaction-manager="transactionManager"/> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <context:component-scan base-package="com.myprojects"/> <context:annotation-config/> <mvc:annotation-driven/> </beans> Here is the stacktrace: java.lang.NoClassDefFoundError: javax/servlet/ServletContext at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2521) at java.lang.Class.getDeclaredMethods(Class.java:1845) at org.springframework.core.type.StandardAnnotationMetadata.hasAnnotatedMethods(StandardAnnotationMetadata.java:161) at org.springframework.context.annotation.ConfigurationClassUtils.isLiteConfigurationCandidate(ConfigurationClassUtils.java:106) at org.springframework.context.annotation.ConfigurationClassUtils.checkConfigurationClassCandidate(ConfigurationClassUtils.java:88) at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:253) at 
org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:223) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:630) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:461) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:120) at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:100) at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:248) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64) at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91) at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:122) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:312) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:211) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:288) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:284) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355) at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletContext at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 62 more Test class: import org.junit.After; import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; import static org.junit.Assert.*; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = {"classpath:applicationContext.xml"}) public class UserServiceTest { @Autowired private UserService service; public UserServiceTest() { } @BeforeClass public static void setUpClass() { } @AfterClass public static void tearDownClass() { } @Before public void setUp() { } @After public void tearDown() { } } Even before writing any test method, I got this error. Any idea why this error?

    Read the article

  • A minimalistic smart array (container) class template

    - by legends2k
    I've written a (array) container class template (lets call it smart array) for using it in the BREW platform (which doesn't allow many C++ constructs like STD library, exceptions, etc. It has a very minimal C++ runtime support); while writing this my friend said that something like this already exists in Boost called MultiArray, I tried it but the ARM compiler (RVCT) cries with 100s of errors. I've not seen Boost.MultiArray's source, I've started learning templates only lately; template meta programming interests me a lot, although am not sure if this is strictly one that can be categorized thus. So I want all my fellow C++ aficionados to review it ~ point out flaws, potential bugs, suggestions, optimizations, etc.; something like "you've not written your own Big Three which might lead to...". Possibly any criticism that will help me improve this class and thereby my C++ skills. Edit: I've used std::vector since it's easily understood, later it will be replaced by a custom written vector class template made to work in the BREW platform. Also C++0x related syntax like static_assert will also be removed in the final code. smart_array.h #include <vector> #include <cassert> #include <cstdarg> using std::vector; template <typename T, size_t N> class smart_array { vector < smart_array<T, N - 1> > vec; public: explicit smart_array(vector <size_t> &dimensions) { assert(N == dimensions.size()); vector <size_t>::iterator it = ++dimensions.begin(); vector <size_t> dimensions_remaining(it, dimensions.end()); smart_array <T, N - 1> temp_smart_array(dimensions_remaining); vec.assign(dimensions[0], temp_smart_array); } explicit smart_array(size_t dimension_1 = 1, ...) { static_assert(N > 0, "Error: smart_array expects 1 or more dimension(s)"); assert(dimension_1 > 1); va_list dim_list; vector <size_t> dimensions_remaining(N - 1); va_start(dim_list, dimension_1); for(size_t i = 0; i < N - 1; ++i) { size_t dimension_n = va_arg(dim_list, size_t); assert(dimension_n > 0); dimensions_remaining[i] = dimension_n; } va_end(dim_list); smart_array <T, N - 1> temp_smart_array(dimensions_remaining); vec.assign(dimension_1, temp_smart_array); } smart_array<T, N - 1>& operator[](size_t index) { assert(index < vec.size() && index >= 0); return vec[index]; } size_t length() const { return vec.size(); } }; template<typename T> class smart_array<T, 1> { vector <T> vec; public: explicit smart_array(vector <size_t> &dimension) : vec(dimension[0]) { assert(dimension[0] > 0); } explicit smart_array(size_t dimension_1 = 1) : vec(dimension_1) { assert(dimension_1 > 0); } T& operator[](size_t index) { assert(index < vec.size() && index >= 0); return vec[index]; } size_t length() { return vec.size(); } }; Sample Usage: #include "smart_array.h" #include <iostream> using std::cout; using std::endl; int main() { // testing 1 dimension smart_array <int, 1> x(3); x[0] = 0, x[1] = 1, x[2] = 2; cout << "x.length(): " << x.length() << endl; // testing 2 dimensions smart_array <float, 2> y(2, 3); y[0][0] = y[0][1] = y[0][2] = 0; y[1][0] = y[1][1] = y[1][2] = 1; cout << "y.length(): " << y.length() << endl; cout << "y[0].length(): " << y[0].length() << endl; // testing 3 dimensions smart_array <char, 3> z(2, 4, 5); cout << "z.length(): " << z.length() << endl; cout << "z[0].length(): " << z[0].length() << endl; cout << "z[0][0].length(): " << z[0][0].length() << endl; z[0][0][4] = 'c'; cout << z[0][0][4] << endl; // testing 4 dimensions smart_array <bool, 4> r(2, 3, 4, 5); cout << "z.length(): " << r.length() << endl; cout << 
"z[0].length(): " << r[0].length() << endl; cout << "z[0][0].length(): " << r[0][0].length() << endl; cout << "z[0][0][0].length(): " << r[0][0][0].length() << endl; // testing copy constructor smart_array <float, 2> copy_y(y); cout << "copy_y.length(): " << copy_y.length() << endl; cout << "copy_x[0].length(): " << copy_y[0].length() << endl; cout << copy_y[0][0] << "\t" << copy_y[1][0] << "\t" << copy_y[0][1] << "\t" << copy_y[1][1] << "\t" << copy_y[0][2] << "\t" << copy_y[1][2] << endl; return 0; }

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on the data. The model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures: The MapReduce framework usually spans distributed servers and runs various tasks in parallel with each other. Various components manage the communication between the nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If a node fails during this process, the framework provides high availability: other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data on thousands of processing machines. How Does the MapReduce Framework Work? A typical MapReduce deployment holds petabytes of data across thousands of nodes. Here is a basic explanation of the MapReduce procedures that run on this massive set of commodity servers. Map() Procedure: There is always a master node in this infrastructure which takes the input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to the worker nodes. A worker node then processes them and does the necessary analysis. Once the worker node completes the processing of its sub-problem, it returns the result to the master node. Reduce() Procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects these answers and aggregates them into the answer to the original big problem it was given. The MapReduce framework runs the Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel with each other, and once each worker node has completed its task it can send its result back to the master node to be compiled into a single answer. This particular procedure can be very effective when it is applied to a very large amount of data (Big Data).
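
    To make the Map()/shuffle/Reduce() flow concrete, here is a small illustration in plain Java (not Hadoop's API, just the word-count example the model is usually explained with): Map() emits (word, 1) pairs, the pairs are grouped by key (the shuffle), and Reduce() sums each group.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Plain-Java sketch of the MapReduce model (classic word count); not Hadoop's API.
        public class WordCountSketch {

            // Map(): turn one line of input into (word, 1) pairs, appending each pair
            // into the shuffle structure, which groups values by key.
            static void map(String line, Map<String, List<Integer>> shuffled) {
                for (String word : line.toLowerCase().split("\\W+")) {
                    if (!word.isEmpty()) {
                        shuffled.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
                    }
                }
            }

            // Reduce(): collapse every key's list of values into a single summary value.
            static Map<String, Integer> reduce(Map<String, List<Integer>> shuffled) {
                Map<String, Integer> counts = new HashMap<>();
                for (Map.Entry<String, List<Integer>> entry : shuffled.entrySet()) {
                    int sum = 0;
                    for (int v : entry.getValue()) {
                        sum += v;
                    }
                    counts.put(entry.getKey(), sum);
                }
                return counts;
            }

            public static void main(String[] args) {
                List<String> input = Arrays.asList(
                        "MapReduce is a programming model",
                        "MapReduce processes large data sets",
                        "large data sets on a cluster");

                // In a real cluster each input split would go to a different worker node,
                // and the framework would do the grouping (shuffle) between Map and Reduce.
                Map<String, List<Integer>> shuffled = new HashMap<>();
                for (String line : input) {
                    map(line, shuffled);
                }
                System.out.println(reduce(shuffled));
            }
        }

    The SELECT/GROUP BY analogy given later in this post maps directly onto this sketch: the word plays the role of the GROUP BY key, and Reduce() is the aggregate.
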
The MapReduce framework has five different steps: preparing the Map() input; executing the user-provided Map() code; shuffling the Map output to the Reduce processors; executing the user-provided Reduce() code; and producing the final output. Here is the dataflow of the MapReduce framework: Input Reader, Map Function, Partition Function, Compare Function, Reduce Function, Output Writer. In a future blog post of this series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement: MapReduce is the equivalent of SELECT and GROUP BY in a relational database, applied to a very large database. Tomorrow: In tomorrow’s blog post we will discuss the buzz word HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We’ve done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something – and I’ve always found that to be a good strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools. While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I’ll try and maintain this list to keep it current, but make sure you check the date of this post’s update – if it’s more than six months old, it’s most likely out of date. Things move fast in the cloud. The Windows Azure Management Portal The primary tool for managing Windows Azure is our portal – most everything you need is there, from creating new services to querying a database. There are two versions as of this writing – a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to be in parity with the Silverlight client. There’s a balance in this portal between simplicity and power – we’re following the “less is more” approach, with increasing levels of detail as you work through the portal rather than overwhelming you with a single, long “more is more” page. You can find the Portal here: http://windowsazure.com (then click “Log In” and then “Portal”) Windows Azure Management API You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options – you can use the more universal REST API’s, which area bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code. You can find the reference for the API’s here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx  All Class Libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx  PowerShell Command-lets PowerShell is one of the most powerful scripting languages I’ve used with Windows – and it’s baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-Lets allow you to work across most any part of the platform – and can even be used within the services themselves. You can do everything with them from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more. You can find more about the Command-Lets here: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well) We have command-line utilities for other operating systems as well: https://www.windowsazure.com/en-us/manage/downloads/  Video walkthrough of using the Command-Lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T  System Center System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms. 
This will be the primary tool we’ll recommend for managing a hybrid or contiguous management process – and as time goes on you’ll see more and more features put into System Center for the entire Windows Azure suite of products. You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324  SQL Server Management Studio / Data Tools / Visual Studio SQL Server has two built-in management and development, and since Version 2008 R2, you can use them to manage Windows Azure Databases. Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases. You can read more about Visual Studio here: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484  You can read more about the SQL tools here: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx  Vendor-Provided Tools Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure. Cerebrata: Tools for managing from the command-line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/  Quest Cloud Tools: Monitoring, Storage Management, and costing tools - http://communities.quest.com/community/cloud-tools  Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch  Cloudgraphs: Monitoring too -  http://www.cloudgraphs.com/  Opstera: Monitoring for Windows Azure and a Scale-out pattern manager - http://www.opstera.com/products/Azureops/  Compuware: SaaS performance monitoring, load testing -  http://www.compuware.com/application-performance-management/gomez-apm-products.html  SOASTA: Penetration and Security Testing - http://www.soasta.com/cloudtest/enterprise/  LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure  Open-Source Tools This is probably the most specific set of tools, and the list I’ll have to maintain most often. Smaller projects have a way of coming and going, so I’ll try and make sure this list is current. Windows Azure MMC: (I actually use this one a lot) http://wapmmc.codeplex.com/  Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon  Azure Application Monitor: http://azuremonitor.codeplex.com/  Azure Web Log: http://www.xentrik.net/software/azure_web_log.html  Cloud Ninja:Multi-Tennant billing and performance monitor -  http://cnmb.codeplex.com/  Cloud Samurai: Multi-Tennant Management- http://cloudsamurai.codeplex.com/    If you have additions to this list, please post them as a comment and I’ll research and then add them. Thanks!

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC and quickly found myself fresh out of excuses and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands on learning but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.  But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – its fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard.  McCracken’s emphasis on real world, pragmatic development was clearly demonstrated in every tool choice, straight-forward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment  of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.  
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provide services to the layer above it and receive services from the layer below it. Description of the OSI layers:
    Layer 1: Physical Layer
    - Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium.
    - Establishment and termination of a connection to a communications medium.
    - Participation in the process whereby the communication resources are effectively shared among multiple users.
    - Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.
    Layer 2: Data Link Layer
    - Provides the functional and procedural means to transfer data between network entities.
    - Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using the Frame Check Sequence (FCS).
    - The destination address is then examined to see whether the host needs to process the rest of the frame itself or pass it on to another host.
    - The layer is divided into two sub-layers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
    - The MAC sub-layer controls how a computer on the network gains access to the data and permission to transmit it.
    - The LLC sub-layer controls frame synchronization, flow control and error checking.
    Layer 3: Network Layer
    - Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
    - Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
    - Routers operate at this layer, sending data throughout the extended network and making the Internet possible.
    Layer 4: Transport Layer
    - Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
    - Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
    - The Transport Layer can keep track of the segments and retransmit those that fail.
    Layer 5: Session Layer
    - Controls the dialogues (connections) between computers.
    - Establishes, manages and terminates the connections between the local and remote application.
    - Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
    - Implemented explicitly in application environments that use remote procedure calls.
    Layer 6: Presentation Layer
    - Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
    - Provides independence from differences in data representation (e.g., encryption) by translating from application format to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.
    Layer 7: Application Layer
    - This layer interacts with software applications that implement a communicating component.
    - Identifies communication partners, determines resource availability, and synchronizes communication.
      - When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
      - When determining resource availability, the application layer must decide whether sufficient network resources exist for the requested communication.
      - In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
    Technorati Tags: Kunal, OSI, Networking

    Read the article

  • jQuery Selector Tester and Cheat Sheet

    - by SGWellens
    I've always appreciated these tools: Expresso and XPath Builder. They make designing regular expressions and XPath selectors almost fun! Did I say fun? I meant less painful. Being able to paste/load text and then interactively play with the search criteria is infinitely better than the code/compile/run/test cycle. It's faster and you get a much better feel for how the expressions work. So, I decided to make my own interactive tool to test jQuery selectors:  jQuery Selector Tester.   Here's a sneak peek: Note: There are some existing tools you may like better: http://www.woods.iki.fi/interactive-jquery-tester.html http://www.w3schools.com/jquery/trysel.asp?filename=trysel_basic&jqsel=p.intro,%23choose My tool is different: It is one page. You can save it and run it locally without a Web Server. It shows the results as a list of iterated objects instead of highlighted html. A cheat sheet is on the same page as the tester which is handy. I couldn't upload an .htm or .html file to this site so I hosted it on my personal site here: jQuery Selector Tester. Design Highlights: To make the interactive search work, I added a hidden div to the page: <!--Hidden div holds DOM elements for jQuery to search--><div id="HiddenDiv" style="display: none"></div> When ready to search, the searchable html text is copied into the hidden div…this renders the DOM tree in the hidden div: // get the html to search, insert it to the hidden divvar Html = $("#TextAreaHTML").val();$("#HiddenDiv").html(Html); When doing a search, I modify the search pattern to look only in the HiddenDiv. To do that, I put a space between the patterns.  The space is the Ancestor operator (see the Cheat Sheet): // modify search string to only search in our// hidden div and do the searchvar SearchString = "#HiddenDiv " + SearchPattern;try{    var $FoundItems = $(SearchString);}   Big Fat Stinking Faux Pas: I was about to publish this article when I made a big mistake: I tested the tool with Mozilla FireFox. It blowed up…it blowed up real good. In the past I’ve only had to target IE so this was quite a revelation. When I started to learn JavaScript, I was disgusted to see all the browser dependent code. Who wants to spend their time testing against different browsers and versions of browsers? Adding a bunch of ‘if-else’ code is a tedious and thankless task. I avoided client code as much as I could. Then jQuery came along and all was good. It was browser independent and freed us from the tedium of worrying about version N of the Acme browser. Right? Wrong! I had used outerHTML to display the selected elements. The problem is Mozilla FireFox doesn’t implement outerHTML. I replaced this: // encode the html markupvar OuterHtml = $('<div/>').text(this.outerHTML).html(); With this: // encode the html markupvar Html = $('<div>').append(this).html();var OuterHtml = $('<div/>').text(Html).html(); Another problem was that Mozilla FireFox doesn’t implement srcElement. I replaced this: var Row = e.srcElement.parentNode;  With this: var Row = e.target.parentNode; Another problem was the indexing. The browsers have different ways of indexing. 
I replaced this: // this cell has the search pattern  var Cell = Row.childNodes[1];  // put the pattern in the search box and search  $("#TextSearchPattern").val(Cell.innerText);  With this: // get the correct cell and the text in the cell // place the text in the search box and search var Cell = $(Row).find("TD:nth-child(2)"); var CellText = Cell.text(); $("#TextSearchPattern").val(CellText);   So much for the myth of browser independence. Was I overly optimistic and gullible? I don’t think so. And when I get my millions from the deposed Nigerian prince I sent money to, you’ll see that having faith is not futile. Notes: My goal was to have a single standalone file. I tried to keep the features and CSS to a minimum–adding only enough to make it useful and visually pleasing. When testing, I often thought there was a problem with the jQuery selector. Invariably it was invalid html code. If your results aren't what you expect, don't assume it's the jQuery selector pattern: the html may be invalid. To help in development and testing, I added a double-click handler to the rows in the Cheat Sheet table. If you double-click a row, the search pattern is put in the search box, a search is performed and the page is scrolled so you can see the results. I left the test html and code in the page. If you are using a CDN (non-local) version of the jQuery library, the designer in Visual Studio becomes extremely slow. That's why there are two versions of the library in the header and one is commented out. For reference, here is the jQuery documentation on selectors: http://api.jquery.com/category/selectors/ Here is a much more comprehensive list of CSS selectors (which jQuery uses): http://www.w3.org/TR/CSS2/selector.html I hope someone finds this useful. Steve Wellens  CodeProject

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low machine language, then when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process.  An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. 
Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.  Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics:  http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
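To make the Stateless Programming point concrete, here is a minimal C# sketch (an illustration added for this write-up, not code from the post): any state a step needs later is pushed to storage shared by all instances instead of being held in one server's memory. IStateStore is a hypothetical abstraction standing in for whatever shared store you choose (a database, table or blob storage, a cache).

    // Hypothetical abstraction over shared storage common to every machine in the deployment.
    public interface IStateStore
    {
        void Save(string key, string value);
        string Load(string key);
    }

    public class CheckoutStep
    {
        private readonly IStateStore _store;

        public CheckoutStep(IStateStore store)
        {
            _store = store;
        }

        // No instance fields carry the order between calls; the state lives in shared
        // storage, so the next request can be served by any machine in the deployment.
        public void RecordShippingChoice(string orderId, string shippingOption)
        {
            _store.Save(orderId + ":shipping", shippingOption);
        }

        public string GetShippingChoice(string orderId)
        {
            return _store.Load(orderId + ":shipping");
        }
    }

The design point is simply that scaling out becomes safe: because nothing is pinned to a particular server's memory, instances can be added, removed, or recycled without losing in-flight work.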

    Read the article

  • Spritesheet per pixel collision XNA

    - by Jixi
    So basically i'm using this: public bool IntersectPixels(Rectangle rectangleA, Color[] dataA,Rectangle rectangleB, Color[] dataB) { int top = Math.Max(rectangleA.Top, rectangleB.Top); int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom); int left = Math.Max(rectangleA.Left, rectangleB.Left); int right = Math.Min(rectangleA.Right, rectangleB.Right); for (int y = top; y < bottom; y++) { for (int x = left; x < right; x++) { Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width]; Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width]; if (colorA.A != 0 && colorB.A != 0) { return true; } } } return false; } In order to detect collision, but i'm unable to figure out how to use it with animated sprites. This is my animation update method: public void AnimUpdate(GameTime gameTime) { if (!animPaused) { animTimer += (float)gameTime.ElapsedGameTime.TotalMilliseconds; if (animTimer > animInterval) { currentFrame++; animTimer = 0f; } if (currentFrame > endFrame || endFrame <= currentFrame || currentFrame < startFrame) { currentFrame = startFrame; } objRect = new Rectangle(currentFrame * TextureWidth, frameRow * TextureHeight, TextureWidth, TextureHeight); origin = new Vector2(objRect.Width / 2, objRect.Height / 2); } } Which works with multiple rows and columns. and how i call the intersect: public bool IntersectPixels(Obj me, Vector2 pos, Obj o) { Rectangle collisionRect = new Rectangle(me.objRect.X, me.objRect.Y, me.objRect.Width, me.objRect.Height); collisionRect.X += (int)pos.X; collisionRect.Y += (int)pos.Y; if (IntersectPixels(collisionRect, me.TextureData, o.objRect, o.TextureData)) { return true; } return false; } Now my guess is that i have to update the textureData everytime the frame changes, no? If so then i already tried it and miserably failed doing so :P Any hints, advices? If you need to see any more of my code just let me know and i'll update the question. Updated almost functional collisionRect: collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y, me.Texture.Width / (int)((me.frameCount - 1) * me.TextureWidth), me.Texture.Height); What it does now is "move" the block up 50%, shouldn't be too hard to figure out. Update: Alright, so here's a functional collision rectangle(besides the height issue) collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y, me.TextureWidth / (int)me.frameCount - 1, me.TextureHeight); Now the problem is that using breakpoints i found out that it's still not getting the correct color values of the animated sprite. So it detects properly but the color values are always: R:0 G:0 B:0 A:0 ??? disregard that, it's not true afterall =P For some reason now the collision area height is only 1 pixel..
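A sketch of the approach the question is circling around (an editorial illustration, not the poster's code): refresh the per-frame colour data whenever the animation frame changes, using the source-rectangle overload of Texture2D.GetData, so that IntersectPixels compares a frame-sized Color[] against a frame-sized rectangle. The names TextureData, objRect and Texture are assumed to be the fields used in the question.

    // Assumes: Texture2D Texture (the whole sprite sheet), Rectangle objRect (the current
    // frame inside the sheet, as set in AnimUpdate), and Color[] TextureData (used by
    // IntersectPixels). Call this after currentFrame/objRect change.
    public void RefreshFrameData()
    {
        // Size the buffer to one frame's worth of pixels...
        if (TextureData == null || TextureData.Length != objRect.Width * objRect.Height)
            TextureData = new Color[objRect.Width * objRect.Height];

        // ...and copy just the current frame's rectangle out of the sprite sheet.
        Texture.GetData(0, objRect, TextureData, 0, TextureData.Length);
    }

The matching collision rectangle is then simply new Rectangle((int)Position.X, (int)Position.Y, objRect.Width, objRect.Height). Note that pulling texture data back from the GPU every frame is relatively expensive, so caching one Color[] per frame up front (and just switching between them) is usually the better trade.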

    Read the article

  • Goto for the Java Programming Language

    - by darcy
    Work on JDK 8 is well-underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published. Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation Summary Provide the benefits of the time-tested goto control structure to Java programs. The Java language has a history of adding new control structures over time: the assert statement in 1.4, the enhanced for-loop in 1.5, and try-with-resources in 7. Having support for goto is long-overdue and simple to implement since the JVM already has goto instructions. Success Metrics The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty-five percent of the JDK's explicit loops are replaced with goto's. Coordination with IDE vendors is expected to help facilitate this goal. Motivation The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files, and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and code. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the features removed from C and C++. However, the designers of the Java programming language have revisited these removals before. The enum construct was also left out, only to be added in JDK 5, and multiple inheritance was left out, only to be added back by the virtual extension methods of Project Lambda. 
Java is a living language, and the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past. Description From its initial version, the JVM has had two instructions for unconditional transfer of control within a method, goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target with the onerous restriction that the label must be lexically enclosing. The grammar addition for the goto statement is: GotoStatement: goto Identifier ; The new goto statement is similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label. Testing Since goto will be implemented using largely existing facilities, only light levels of testing are needed. Impact Compatibility: Since goto is already a keyword, there are no source compatibility implications. Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.
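For a sense of what the proposed surface syntax would look like in use, note that C# (unlike Java) already ships a goto statement with essentially the grammar sketched above. The following small C# illustration of the compact state-machine style the JEP alludes to is an editorial addition, not part of the proposal:

    using System;

    class TrafficLight
    {
        static void Main()
        {
            int cycles = 0;

        Red:
            Console.WriteLine("Red");
            if (++cycles > 3) goto Done;   // unconditional jump forward
            goto Green;

        Green:
            Console.WriteLine("Green");
            goto Yellow;

        Yellow:
            Console.WriteLine("Yellow");
            goto Red;                      // unconditional jump backward

        Done:
            Console.WriteLine("Finished");
        }
    }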

    Read the article

  • My First Weeks at Red Gate

    - by Jess Nickson
    Hi, my name’s Jess and early September 2012 I started working at Red Gate as a Software Engineer down in The Agency (the Publishing team). This was a bit of a shock, as I didn’t think this team would have any developers! I admit, I was a little worried when it was mentioned that my role was going to be different from normal dev. roles within the company. However, as luck would have it, I was placed within a team that was responsible for the development and maintenance of Simple-Talk and SQL Server Central (SSC). I felt rather unprepared for this role. I hadn’t used many of the technologies involved and of those that I had, I hadn’t looked at them for quite a while. I was, nevertheless, quite excited about this turn of events. As I had predicted, the role has been quite challenging so far. I expected that I would struggle to get my head round the large codebase already in place, having never used anything so much as a fraction of the size of this before. However, I was perhaps a bit naive when it came to how quickly things would move. I was required to start learning/remembering a number of different languages and technologies within time frames I would never have tried to set myself previously. Having said that, my first week was pretty easy. It was filled with meetings that were designed to get the new starters up to speed with the different departments, ideals and rules within the company. I also attended some lightning talks being presented by other employees, which were pretty useful. These occur once a fortnight and normally consist of around four speakers. In my spare time, we set up the Simple-Talk codebase on my computer and I started exploring it and worked on my first feature – redirecting requests for URLs that used incorrect casing! It was also during this time that I was given my first introduction to test-driven development (TDD) with Michael via a code kata. Although I had heard of the general ideas behind TDD, I had definitely never tried it before. Indeed, I hadn’t really done any automated testing of code before, either. The session was therefore very useful and gave me insights as to some of the coding practices used in my team. Although I now understand the importance of TDD, it still seems odd in my head and I’ve yet to master how to sensibly step up the functionality of the code a bit at a time. The second week was both easier and more difficult than the first. I was given a new project to work on, meaning I was no longer using the codebase already in place. My job was to take some designs, a WordPress theme, and some initial content and build a page that allowed users of the site to read provided resources and give feedback. This feedback could include their thoughts about the resource, the topics covered and the page design itself. Although it didn’t sound the most challenging of projects when compared to fixing bugs in our current codebase, it nevertheless provided a few sneaky problems that had me stumped. I really enjoyed working on this project as it allowed me to play around with HTML, CSS and JavaScript; all things that I like working with but rarely have a chance to use. I completed the aims for the project on time and was happy with the final outcome – though it still needs a good designer to take a look at it! I am now into my third week at Red Gate and I have temporarily been pulled off the website from week 2. I am again back to figuring out the Simple-Talk codebase. 
Monday provided me with the chance to learn a bunch of new things: system level testing, Selenium and Python. I was set the challenge of testing a bug fix dealing with the search bars in Simple-Talk. The exercise was pretty fun, although Mike did have to point me in the right direction when I started making the tests a bit too complex. The rest of the week looks set to be focussed on pair programming with Mike as we work together on a new feature. I look forward to the challenges that still face me and hope that I will be able to get up to speed quickly. *fingers crossed*

    Read the article

  • Flow-Design Cheat Sheet &ndash; Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now is the time to show you how to translate those diagrams into code. Hopefully you feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram. They leave it to you how to translate your elaborate design into code. Or maybe UML thinks it´s so easy no further explanations are needed? I don´t know. I just know that, as soon as people stop designing with UML and start coding, things end up being very different from the design. And that´s bad. That degrades graphical designs to just wasted time on paper (or some designer). I even believe that´s the reason why most programmers view textual source code as the only and single source of truth. Design and code usually do not match. FD is trying to change that. It wants to make true design a first-class method in every developer´s toolchest. For that the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More of that in some other article.) Translating to Methods The first translation I want to show you is for small designs. When you start using FD you should translate your diagrams like this. Functional units become methods. That´s it. An input-pin becomes a method parameter, an output-pin becomes a return value: The above is a part. But a board can be translated likewise and calls the nested FUs in order: In any case be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: calling nested functional units in proper sequence. What about multiple input-pins? Try to avoid them. Replace them with a join returning a tuple: What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters: But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation: All pretty straightforward, isn´t it. But what about wires, named pins, entry points, explicit dependencies? I suggest you don´t use this kind of translation when your designs need these features. Translating to methods is for small-scale designs like you might do once you´re working on the implementation of a part of a larger design. Or maybe for a code kata you´re doing in your local coding dojo. Instead of doing TDD try doing FD and translate your design into methods. You´ll see that way it´s much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring. Translating to Events [coming soon]
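As a concrete illustration of the method translation (a toy example added for this write-up, not taken from the article): each part becomes a method whose input-pin is a parameter and whose output-pin is the return value, and the board method does nothing but call the parts in sequence.

    using System;

    static class NameNormalizer
    {
        // Board: no ifs, switches or loops here - it only wires the parts together.
        public static string[] NormalizeNames(string text)
        {
            string[] lines = SplitIntoLines(text);   // part
            string[] trimmed = TrimLines(lines);     // part
            return CapitalizeLines(trimmed);         // part
        }

        // Parts: this is where the actual logic lives.
        static string[] SplitIntoLines(string text)
        {
            return text.Split('\n');
        }

        static string[] TrimLines(string[] lines)
        {
            return Array.ConvertAll(lines, l => l.Trim());
        }

        static string[] CapitalizeLines(string[] lines)
        {
            return Array.ConvertAll(lines,
                l => l.Length == 0 ? l : char.ToUpper(l[0]) + l.Substring(1));
        }
    }

A join for multiple input-pins would simply become another small method returning a tuple, which the board passes on to the next part.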

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries or otherwise impacted by the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example the OMB’s Exhibit 300 (i.e. Business Case justification for IT investments). A very interesting element of this Approach includes the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn’t always clear, however, particularly during the first interactions of a Program’s technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost-savings). The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300’s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss, when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied.  “Actionable” is the key word – too many Public Service entity EA’s (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally-enforced by an Enterprise or System Architect’s office. Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the “Solution Architect” (the “Problem-Solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned both with contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables) and with the rapidly increasing complexities of today’s more portable and transparent IT solutions.  
We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), likely already to be seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes.  Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best. “A language that doesn‘t affect the way you think about programming is not worth knowing“ With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. 
I often find myself refactoring methods like this… void SomeMethod(bool doThis, bool doThat) { if (!(doThis ^ doThat)) throw new ArgumentException(“At least one arg should be true”); if (doThis) DoThis(); if (doThat) DoThat(); } …into a type-based solution, like this: enum Action { DoThis, DoThat, Both }; void SomeMethod(Action action) { if (action == Action.DoThis || action == Action.Both) DoThis(); if (action == Action.DoThat || action == Action.Both) DoThat(); } At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
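The lazy-evaluation point from the Haskell section can be approximated in C# as well. Here is a small sketch (an editorial addition, not the author's code) using deferred iterator sequences, where an "infinite" sequence is only computed as far as a consumer actually asks:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class LazyDemo
    {
        // An "infinite" sequence of square numbers: nothing runs until it is enumerated.
        static IEnumerable<long> Squares()
        {
            for (long n = 1; ; n++)
                yield return n * n;
        }

        static void Main()
        {
            // Only the first five elements are ever produced.
            foreach (long s in Squares().Take(5))
                Console.WriteLine(s);
        }
    }

It is the same "learn it elsewhere, apply it in your day-job language" effect as the type example above, this time borrowed from Haskell's laziness.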

    Read the article

  • Incentivizing Work with Development Teams

    - by MarkPearl
    Recently I saw someone on twitter asking about incentives and if anyone had past experience with incentivizing work. I promised to respond with some of the experiences I have had in the past so here goes... **Disclaimer** - these are my experiences with incentives, generally in software development - in some other industries this may not be applicable – this is also my thinking at this point in time, with more experience my opinion may change. Incentivize at the level that you want people to group at If you want to promote a team mentality, incentivize teams. If you want to promote an individual mentality, incentivize individuals. There is nothing worse than mixing this up. Some organizations put a lot of effort in establishing teams and team mentalities but reward individuals. This counteracts the resources they have put towards establishing a team mentality. In the software projects that I work with we want to promote cross-functional teams that collaborate. Personally, if I was on a team and knew that there was an opportunity to work on a critical component of the system, and that by doing so I would get a bigger bonus, then I would be hesitant to include other people in solving that problem. Thus, I would hinder the team's efforts in being cross-functional and reduce collaboration levels. Does that mean everyone in the team should get an even share of an incentive? In most situations I would say yes - even though this may feel counter-intuitive. I have heard arguments put forward that if “person X contributed more than person Y then they should be rewarded more” – This may sound controversial but I would rather treat people how you would like them to perform, not where they currently are at. To add to this approach, if someone is freeloading, you bet your bottom dollar that the team is going to make this a lot more transparent if they feel that individual is going to be rewarded at the same level that everyone else is. Bad incentives promote destructive work If you are going to incentivize people, pick your incentives very carefully. I had an experience once with a sales person who was told they would get a bonus provided that they met an ordering target with a particular supplier. What did this person do? They sold everything at cost for the next month or so. They reached the goal, but the company didn't gain anything from it. It was a bad incentive. Expect the same with development teams: if you incentivize zero bug levels, you will get zero code committed to the solution. If you incentivize lines of code, you will get many, many lines of bad code. Is there such a thing as a good incentive? Monetarily, I am not sure there is. I would much rather encourage organizations to pay their people what they are worth upfront. I would also advise against paying money to teams as an incentive or even a bonus or reward for reaching a milestone. Rather have a breakaway for the team that promotes team building as a reward if they reach a milestone than pay them more money. I would also advise against making the incentive the reason for them to reach the milestone. If this becomes the norm it promotes people to begin to only do their job if there is an incentive at the end of the line. This is not a behaviour one wants to encourage. If the team or individual is in the right mind-set, they should not work any harder than they are right now with normal pay.

    Read the article

  • Collaborative Organizations build Organizational Culture

    “A Collaborative organization builds its culture based on the idea of the family or an athletic team.” (Hoefling, 2001) As I grew up, I participated in many different types of clubs, civic organizations, and sports teams. Now looking back at the more successful undertakings, I can see three commonalities amongst them. They all shared a defined purpose or goal, defined functional roles, and a shared sense of responsibility to the group. Defined Purpose or Goal In order to unite people to work together, they must share a common goal or have a common purpose. An example of this would be the Lions Club International Foundation. Their purpose is to help everyone lead healthier and more productive lives, nurture the potential of youth, promote health, serve the elderly, empower the disabled and help victims of disasters. This organization holds localized meetings across the world and works in conjunction with other localized clubs within their organization, along with other organizations, to promote common goals. If there are no common goals for the group, then there is nothing that binds people to the group, and nothing will be done. Defined Functional Roles In order for an organization to work and function as a team, they must have defined roles and everyone must know how their roles are interdependent on each other. Let's shed light on this subject by looking at a football team's offense. Each player has an assigned role to play each time the ball is snapped. The offensive line blocks for the running back or quarterback, the quarterback passes the ball to the wide receiver or hands it off to the running back, and the running back and wide receivers run with the ball towards the goal line. Each member of this team shares a common goal of scoring a touchdown, but if each team member does not fulfill their assigned roles the offense will collapse and the team will lose yards. This is a setback to the team's goal of scoring a touchdown because they are then potentially farther away from the goal line. In addition, if all the players do not know their roles and how they are part of a larger team then even larger yard losses can occur. Shared Sense of Personal Responsibility to the Group Shared responsibility comes with the shared common goals. Each person in the organization must do their part to promote the common shared goal or purpose based on their abilities. A prime example of this is a wrestling team competing in a match. Points are awarded to the team based on how many wins the team achieves in the meet and, of those, how many wins were won by decision or by pin. If a wrestler pins his opponent the team will receive 2 points for the win, but if the wrestler wins by decision, then the team only gets one point for the win. So it is the responsibility of each person on the team not to get pinned if they are unable to win the match. If the team member gets pinned then the other team receives an additional point for the win. References: Hoefling, T. (2001). Working Virtually: Managing People for Successful Virtual Teams and Organizations. Sterling, VA: Stylus Publishing, LLC.

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked…..i.e. “when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”! Let’s take a scenario where you are working on a Sales Force Automation project. We’ll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms……implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business. Now let’s take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project “Process Centric”?.......well you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions. At this point you might be asking “why couldn’t I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, and asking which approach each one is best suited to, for example: Manage outbound calls (a series of linked human and system tasks for calling and following up with prospects); Manage content revision (updating the content on a website); Update User Preferences (updating a user’s display preferences); Assign Lead (reviewing a lead, then assigning it to a sales person); Convert Lead to Quote (updating the status of a lead, and then converting it to a sales order). As you can see, it’s not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let’s take one final scenario….. As you captured the To-Be Process flows for the Sales Force automation project, you discover a “Gap” in terms of what the client requires, and what the standard COTS application can provide. 
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

< Previous Page | 167 168 169 170 171 172 173 174 175 176 177 178  | Next Page >