Search Results

Search found 22961 results on 919 pages for 'memory management'.


  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to the telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself that I hadn't intended to enter an obfuscated code contest/code golf exercise.

    After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is closer to my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples etc." So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases.

    My questions, then: Should I be put off? Why, or why not? Are these kinds of tests what I should be expecting for junior roles? Should I learn stuff exam-style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books.

    Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up on something I need a regex for, and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter lists.

    What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?
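    As a concrete illustration of that call sequence, here is a minimal sketch of such a server, assuming a Linux/POSIX environment; the port number is arbitrary and error handling is abbreviated:

        /* Minimal TCP server skeleton: socket(), bind(), listen(), accept().
           Assumes POSIX/Linux; error handling abbreviated for clarity. */
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);        /* 1. create  */

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(8080);               /* arbitrary  */

            bind(srv, (struct sockaddr *)&addr, sizeof addr); /* 2. bind    */
            listen(srv, 16);                                  /* 3. listen  */

            for (;;) {
                int cli = accept(srv, NULL, NULL);            /* 4. accept  */
                write(cli, "hello\n", 6);
                close(cli);  /* one client at a time; select() or
                                pthread_create() would add concurrency */
            }
        }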

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
    I’ve often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes – just a quick suggestion that changes an approach. As a languages student at university, one of the best I spotted came from outside the lecture halls and ended up saving me time (and lots of huffing and puffing) – the use of a rainbow of sticky notes for well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy. When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we’ve struck a good balance. The 45 Database Performance Tips eBook covers different tips to help you avoid code that saps performance, whether that’s the ‘gotchas’ to be aware of when using Object to Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL. The eBook is also available to download with SQL Prompt from Red Gate. We often hear that it’s the productivity-boosting side of SQL Prompt that makes it useful for everyday coding. So when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (and we can also customize its name to speed up the search) to get started. We hope you find the eBook helpful, and as always on Simple-Talk, we’d love to hear from you too. If you have a performance tip for SQL Server you’d like to share, email Melanie on the Simple-Talk team ([email protected]) and we’ll publish a collection in a follow-up post.

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requiring comparison just to be sure you are working with the latest code, and complex merge problems arise.
    - Time estimates offered by developers exclude the time required to fix any of these problems. So if I say "no, it will take 10x longer", I have to constantly explain these issues and risk myself, because now management may perceive me as "slow".
    - The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to have a distrust of source control and therefore compound the issue by not using it.
    - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust for source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise, part of the Oracle Advanced Analytics option to Oracle Database, and read more on their story here.

    Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs.

    The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as for ensuring zero downtime (a website needs to be up 24x7). It also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle, from data collection to model building, model assessment, and then outcome assessment and feedback to the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions.

    As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
    I'm kinda new in Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. Installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands nsalvarrey@Delleuze:~/glib-2.6.3$ ^C nsalvarrey@Delleuze:~/glib-2.6.3$ And then, with the make command, I get this: galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value' extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default"))); ^ In file included from garray.c:35:0: galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new' extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default"))); ^ make[4]: *** [garray.lo] Error 1 make[4]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[3]: *** [all-recursive] Error 1 make[3]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[2]: *** [all] Error 2 make[2]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[1]: *** [all-recursive] Error 1 
make[1]: se sale del directorio «/home/nsalvarrey/glib-2.6.3» make: *** [all] Error 2 nsalvarrey@Delleuze:~/glib-2.6.3$ (it's actually a lot longer) Can somebody help me?
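    For context on why the build fails here: the failing construct in galias.h is GCC symbol aliasing, and GCC requires the alias target to be defined in the same translation unit. A minimal sketch of the pattern, with illustrative names (assuming GCC on Linux):

        /* alias_demo.c - the aliasing pattern glib's galias.h uses.
           Names here are illustrative. Build with: gcc alias_demo.c */
        #include <stdio.h>

        /* The alias target must be DEFINED in this translation unit. */
        int IA__answer(void) { return 42; }

        /* Declare 'answer' as another name for the same symbol. */
        extern __typeof (IA__answer) answer
            __attribute__ ((alias ("IA__answer")));

        int main(void)
        {
            printf("%d\n", answer());   /* prints 42 via the alias */
            return 0;
        }

    If IA__answer were declared but its definition never emitted, GCC would fail with exactly the "aliased to undefined symbol" error shown above; one common explanation for seeing this with glib 2.6.3 is that newer GCC releases changed their inline semantics, so the internal IA__ definitions are no longer generated, which is why such an old GLib fails to build on a modern toolchain.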

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post provides some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser).

    Enterprise Monitoring Tools
    Oracle Enterprise Manager - OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment, such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage.

    Tools for the Database
    Oracle Diagnostics Pack:
    - Automatic Workload Repository (AWR) - this tool gets statistics from memory about the Time Model or DB Time, Wait Events, Active Session History and high-load SQL queries.
    - Automatic Database Diagnostic Monitor (ADDM) - this self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues. It locates the root cause of the issue, provides recommendations for correcting it, and quantifies the expected benefit.
    Oracle Database Tuning Pack:
    - SQL Tuning Advisor - this enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes, and restructuring of SQL statements.
    - SQL Access Advisor - this enables you to optimize data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload.

    Tools for the Application Server
    - WebLogic Console - a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators use the Administration Console for a number of tasks, including: starting and stopping WebLogic servers or entire clusters; configuring server parameters, security, database connections and deployed applications; and viewing server status, health and metrics.
    - YourKit for profiling - helps analyze synchronization issues, including which threads were calling wait(), and for how long; and which threads were blocked trying to acquire a monitor held by another thread (synchronized methods/blocks), and for how long.

    Tools for the Client
    - Fiddler - allows you to inspect traffic logs, debug and set breakpoints.
    - Firebug - allows you to inspect and edit HTML, monitor network activity and debug JavaScript.

    Read the article

  • I still think Twitter is dead … but

    - by Randy Walker
    Twitter finally hit the mainstream about 8 months ago, but I've been saying for a couple of years now: without a real way for the company to earn money, what's the future fate of Twitter? On the personal side, where is the real value for the users? For the most part, Twitter has replaced most people's IM (instant messaging), at least in the technology circles I run in. It still has value for users as a communication tool. But I see it more as a fad. My prediction is that over the next 6 months we'll start seeing a usage drop (if we haven't already started to see it).

    On the business side, how does Twitter make money? It doesn't. If you use the text messaging capabilities, you see a few ads. But most smart phone and PC users won't ever see them. I still think Twitter has the best chance to make money by forcing the "collectors" to pay money. You know what I mean by "collector": those people that collect tons of followers or friends. If Twitter caps the number of followers and makes you pay to have more, would you? The normal Twitter user doesn't have that many followers, and this is where my title comes in … BUT

    The financial value for Twitter is really seen through businesses connecting with their customers. I've seen 3 effective ways this has been accomplished.

    1. Giving your customers a coupon or announcing a sale. My favorite is @amazonmp3; being a huge music lover, I get notified when they put music on sale. Various restaurants like @ruthschris_ARK will let their favorite customers know about certain specials. I was once traveling through Memphis looking for a sushi restaurant when @BluefinMemphis offered 50% off if we mentioned we saw them on Twitter. It was their first attempt at trying to encourage customers in the door, and after talking with the management, it was a huge success.

    2. Giveaways. Several companies have started huge marketing campaigns, but my favorite is watching companies like @namecheap post trivia questions, where the first person to respond wins a prize.

    3. Responding to customer complaints. I once posted a complaint about American Express (a company that I have slowly come to really dislike), but they actually had someone contact me to try and resolve the issue. I give them credit for paying attention, but still dislike them for their horrible credit practices.

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/

    Once you have a tool like this one, you can also start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but also that the logical integrity of your business rules is enforced.

    For example, let's say that the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that this in theory is true, but a lot of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices, and on the ERP used implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect?

    Well, unit testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time.

    I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx

    I'll enjoy discussing all this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))

    Read the article

  • Java Spotlight Episode 78: Jasper Potts on the JavaFX Scene Builder

    - by Roger Brinkley
    An interview with Jasper Potts about the new JavaFX Scene Builder. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador, and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    - JavaFX Scene Builder Developer Preview available for testing
    - Java EE: Unlock the Java EE 6 Platform using NetBeans 7.1; Tuning GlassFish for Production; JSF 2.2 Update from Ed Burns
    - John Rose at Microsoft's Lang.NEXT summit: recording of John's Java 8 presentation, Jeroen Frijters' presentation on IKVM.NET, Martin Odersky's keynote
    - JVM Language Summit 2012: July 30 - August 1, Oracle Santa Clara (same as last year); CFP coming in a few days; JVM Language Summit 2011 presentations & recordings
    - Proposed development schedule for JDK 8
    - Say hello to Mathias Axelsson

    Events
    - April 11, Cleveland JUG, Cleveland, OH
    - April 12, GreenJUG, Greenville, SC
    - April 17-18, JavaOne Russia, Moscow, Russia
    - April 18-20, Devoxx France, Paris, France
    - April 17-20, GIDS, Bangalore
    - April 21, Java Summit, Chennai
    - April 26, Mix-IT, Lyon, France
    - May 3-4, JavaOne India, Hyderabad, India
    - May 5, Bangalore, Pune, ?? - JUG outreach
    - May 7, OTN Developer Day, Mumbai
    - May 8, OTN Developer Day, Delhi

    Feature Interview
    Jasper Potts is the Developer Experience Architect for the Java Client Group at Oracle, responsible for technical design for everything that sits on the core platform, including Controls, Tools, Samples and Blueprints. Formerly a lead engineer on the JavaFX & Swing teams working on the new JavaFX UI Controls and Graphics frameworks, he is also responsible for designing, developing and presenting demos during the keynotes at JavaOne and Devoxx, and is a JavaOne Rockstar presenter, having presented many sessions on JavaFX and Swing at many conferences. Prior to Sun he founded Xerto, a desktop applications company developing Imagery, a Java professional photo management application. In this interview Jasper talks about the recently released JavaFX Scene Builder.

    Mail Bag

    What's Cool
    - Contribute to GlassFish in Five Different Ways
    - Stephen Chin and James Weaver join Oracle
    - Adam Bien - Building JavaFX 2 Libraries From Source With Maven 3
    - Paul Sandoz - Java Boomerang
    - Building Jigsaw on Mac OS X using VirtualBox
    - Mandy Chung: Jigsaw for Mac OS X

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today's web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization - minutes of downtime will result in significant loss of revenue and reputation.

    There is not a "one size fits all" approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA - "people and processes" are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA.

    At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL

    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node, and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies.

    DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB

    Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL.

    Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.

    Putting it All Together

    When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd-party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle OpenWorld, and the slides are available here.

    Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL Consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.

    Read the article

  • What do you do when you encounter an idiotic interview question?

    - by Senthil
    I was interviewing with a "too proud of my Java skills"-looking person. He asked me "What is your knowledge on Java IO classes.. say.. hash maps?" He asked me to write a piece of Java code on paper - instantiate a class and call one of the instance's methods. When I was done, he said my program wouldn't run. After 5 minutes of serious thinking, I gave up and asked why. He said I didn't write a main function, so it wouldn't run. ON PAPER. [I am too furious to continue with the stupidity...] Believe me, it wasn't trick questions or a psychic or anger management evaluation thing. I can tell from his face, he was proud of these questions. That "developer" was supposed to "judge" the candidates. I can think of several things:

    1. Hit him with a chair (which I so desperately wanted to) and walk out.
    2. Simply walk out.
    3. Ridicule him, saying he didn't make sense.
    4. Politely let him know that he didn't make sense and go on to try and answer the questions.
    5. Don't tell him anything, but simply go on to try and answer the questions.

    So far, I have tried just 4 and 5. It hasn't helped. Unfortunately many candidates seem to do the same and remain polite, but this lets these kinds of "developers" just keep ascending up the corporate ladder, gradually getting the capacity to pi** off more and more people.

    How do you handle these interviewers without bursting your veins? What is the proper way to handle this, yet maintain your reputation if other potential employers were to ever get to know what happened here? Is there anything you can do, or should you even try to fix this?

    P.S. Let me admit that my anger has been amplified many times by these facts:
    1. He was smiling like you wouldn't believe.
    2. I got so many (20 or so) calls from that company the day before, asking me to come to the interview, that I couldn't do any work that day.
    3. I wasted a paid day off.

    Read the article

  • Which web framework to use under Backbonejs?

    - by egidra
    For a previous project, I was using Backbone.js alongside Django, but I found out that I didn't use many features from Django. So, I am looking for a lighter framework to use underneath a Backbone.js web app.

    I never used Django's built-in templates. When I did, it was to set up the initial index page, but that's all. I did use the user management system that Django provided. I used models.py, but never views.py. I used urls.py to set up which template the user would hit upon visiting the site. I noticed that the two features I used most from Django were South and Tastypie, and they aren't even included with Django. In particular, django-tastypie made it easy for me to link up my frontend models to my backend models. It made it easy to JSONify my frontend models and send them to Tastypie. Although, I found myself overriding a lot of Tastypie's methods for GET, PUT and POST requests, so it became useless.

    South made it easy to migrate new changes to the database, although I had so much trouble with it. Is there a framework with an easier way of handling database modifications than using South? When using South with multiple people, we had the worst time keeping our databases synced. When someone added a new table and pushed their migration to git, the other two people would spend days trying to use South's automatic migration, but it never worked. I liked how Rails had a manual way of migrating databases.

    Even though I used Tastypie and South a lot, I found myself not actually liking them, because I ended up overriding most Tastypie methods for each Resource, and I also had the worst trouble migrating new tables and columns with South. So, I would like a framework that makes that process easier. Part of my problem was that they are too "magical". Which framework should I use? Node.js or a lighter Python framework? Which works best with my above criteria?

    Read the article

  • Free training at Northwest Cadence

    - by Martin Hinshelwood
    Even though I have only been at Northwest Cadence for a short time, I have already done so much. What I really wanted to do was let you guys know about a bunch of FREE training that NWC offers. These sessions are at a fantastic time for the UK, as 9am PST (Seattle time) is around 5pm GMT. It's a fantastic way to finish off your Fridays, and with the lack of love for developers in the UK set to continue, I would love some of you guys to get some from the US instead. There are really two offerings. The first is something called Coffee Talks, which take you through an hour's worth of detail in a specific category.

    Coffee Talks
    These coffee talks have some superb topics, and you can get excellent interaction with the presenter as they are kind of informal.

    Date     | Day     | Time (PST)        | Topic                                                                                | Register
    01/04/11 | Tuesday | 8:30AM - 9:30AM   | Real World Business and Technical Benefits of ALM with TFS 2010                      | 150656
    01/28/11 | Friday  | 9:00AM - 10:00AM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152810
    02/11/11 | Friday  | 9:00AM - 10:00AM  | Visual Source Safe to Team Foundation Server                                         | 152844
    02/25/11 | Friday  | 2:00PM - 3:00PM   | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152816
    03/11/11 | Friday  | 9:00AM - 10:00AM  | Lab Manager: The Ultimate "No More No Repro" Tool                                    | 152809
    03/25/11 | Friday  | 9:00AM - 10:00AM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152838
    04/08/11 | Friday  | 9:00AM - 10:00AM  | Visual Source Safe to Team Foundation Server                                         | 152846
    04/22/11 | Friday  | 9:00AM - 10:00AM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152839
    05/06/11 | Friday  | 2:00PM - 3:00PM   | Real World Business and Technical Benefits of ALM with TFS 2010                      | 150657
    05/20/11 | Friday  | 9:00AM - 10:00AM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152842
    06/03/11 | Friday  | 9:00AM - 10:00AM  | Visual Source Safe to Team Foundation Server                                         | 152847
    06/17/11 | Friday  | 9:00AM - 10:00AM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152843

    ALM Training Engagement Program
    Microsoft has released a new program to bring free Visual Studio 2010 training sessions to select customers on Microsoft Visual Studio products and how Application Lifecycle Management (ALM) solutions can help drive greater business impact. For more details on this program, please see the process chart below. To get started, send an email to us. This training is paid for by Microsoft, and you would need to commit to 4 sessions in order to get accepted into the program. So these have more hoops to jump through to get them, but the content is much more formal and centres around adoption.

    Read the article

  • Log Debug Messages without Debug Serial on Shipped Device

    - by Kate Moss' Open Space
    Debug messages are one of the ancient but useful ways of problem resolving. The message is redirected to Platform Builder if KITL is enabled; otherwise it goes to the default debug port, usually a serial port on most platforms, but it really depends on how OEMWriteDebugString and OEMWriteDebugByte are implemented. For many reasons, we don't want to have a debug serial port - for example, we don't have enough spare serial ports, or it can affect performance. So some BSP designers decide to dump the messages onto other media: it could be a log file, shared memory, or any solution that is suitable for the need.

    In CE 5.0 and previous, the OAL and kernel are linked into one binary; in other words, you can use whatever function is in the kernel, such as SC_CreateFileW to access the filesystem in the OAL, even though this is strongly not recommended. But since the OAL became a standalone executable in CE 6.0, we can no longer use this back door, but only the interface exported in NKGlobal, which provides just enough for the OAL but no more. Accessing the filesystem, or using sync objects to communicate with other drivers or applications, is not even an option. It sounds like the kernel locked itself up. Of course, the OAL is in kernel space, so you can still do whatever you want to hack into the kernel, but once again, that not only makes for a dirty solution but also a fragile one. So isn't there an elegant solution?

    Let's see how a debug message gets printed out. In private\winceos\COREOS\nk\kernel\printf.c, OutputDebugStringW is the one pumping out the messages; most of the code is for error handling and serialization, but what's really interesting is the following code piece:

        if (g_cInterruptsOff) {
            OEMWriteDebugString ((unsigned short *)str);
        } else {
            g_pNKGlobal->pfnWriteDebugString ((unsigned short *)str);
        }
        CELOG_OutputDebugString(dwActvProcId, dwCurThId, str);

    It outputs the message to the default debug output (redirected to KITL when available) or the OAL when needed, but note the highlighted part: it also invokes CELOG_OutputDebugString. Follow the thread to private\winceos\COREOS\nk\logger\CeLogInstrumentation.c, and you'll find this function dumps whatever input it gets to CELOG. So whatever the debug message is, we always get a clone in CELOG.

    Generally speaking, all of the debug messages are logged to CELOG already, so what you need to do is run celogflush.exe with the CELZONE_DEBUG zone, and then view the data using the Readlog tool. Here is some information about these tools:
    CELOG - http://msdn.microsoft.com/en-us/library/ee479818.aspx
    READLOG - http://msdn.microsoft.com/en-us/library/ee481220.aspx

    Also, for advanced readers, I encourage you to dig into private\winceos\COREOS\nk\celog\celogdll - the source of CELOG.DLL - and use it as a starting point to create a more lightweight debug message logger for your own device!
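    As a concrete illustration of the "dump the messages onto other media" idea above, here is a minimal, self-contained ring-buffer logger sketched in plain C. The names and size (RingWriteDebugString, RING_SIZE) are hypothetical rather than a real CE API; a real OAL implementation would place the buffer in shared or persistent memory and account for the kernel's serialization:

        /* Hypothetical sketch: collect debug strings in a fixed-size ring
           buffer instead of writing them to a debug serial port. */
        #include <stdio.h>

        #define RING_SIZE 4096u

        static char     g_ring[RING_SIZE];
        static unsigned g_head;   /* total characters ever written */

        static void RingWriteDebugString(const char *str)
        {
            /* Wrap at the end of the buffer; the oldest messages are
               silently overwritten - the usual trade-off for an
               allocation-free, always-available logger. */
            while (*str)
                g_ring[g_head++ % RING_SIZE] = *str++;
        }

        int main(void)
        {
            RingWriteDebugString("boot: OAL initialized\r\n");
            RingWriteDebugString("boot: KITL disabled, logging to RAM\r\n");

            /* A debugger or dump tool would read g_ring later. */
            fwrite(g_ring, 1, g_head < RING_SIZE ? g_head : RING_SIZE, stdout);
            return 0;
        }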

    Read the article

  • Best Ati Radeon x1200 drivers for 12.10

    - by Jaclyn
    [Long story short is at the bottom if you don't care about my ranting]

    Ok, well, I have the unfortunate distinction of having an Acer Extensa 4420 (yes, the model with the faulty motherboard, and no, I do not know how it is still working either). Long story short, I need the best drivers for my ATI Radeon X1200 integrated graphics card. The ATI proprietary drivers for 12.10 no longer support my video card. Currently when I try to play Minecraft, or any game really, the framerate is quite terrible, despite the fact that my handy dandy system load monitor says that I have plenty of memory and CPU power (mind you, this is when I'm not playing Minecraft; that kind of uses up all of my resources unless it's on the worst graphical settings, and even then I have terrible framerate, but plenty of resources left over). I tried fglrx through a workaround guide, and it completely killed my display and I had to uninstall it. I'm considering just trying to install fglrx through Synaptic, but I am hesitant to do so since I don't want a repeat of the BSBDD!!! (Black Screen of Bad Drivers 'o DOOOM!!!), so I will wait until after I get some input from you fine ladies and gentlemen as to what your advice is.

    Ok, so I'm running Xubuntu 12.10 64-bit. I upgraded from 10.10 to 11.04. Then about a month ago I went from Natty to Quantal via the update option in the package updater, and then I decided that I was sick of GNOME, so I installed Xfce. I did not install new drivers from Natty to when I wound up at 12.10, so that shouldn't have changed, and they were indeed quite terrible back then, but now I'm using Xubuntu as my main OS, so I actually need good drivers. uuuugh.

    Anyway, long story short: I need to know the best drivers available for my card, and how to install them, because I am a Linux novice, and I have tried everything that I can think of, including searching Google.

    Small edit: I forgot to note that since I upgraded to 12.10, my VGA out does not work (I haven't had a chance to try the S-Video yet), and possibly related, the USB port on that side of my laptop does not work anymore either.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions, about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

    My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won.

    During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design function nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool.

    I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

    There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • OpenWorld Day 1

    - by Antony Reynolds
    A Day in the Life of an OpenWorld Attendee, Part I

    Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend! With 50,000 attendees I didn't expect to bump into too many people I knew; boy was I wrong! I walked into the registration area and was immediately hailed by a couple of customers I had worked with a few months ago. Moving to the employee registration area in a different hall, I bumped into a colleague from the UK who was also registering. As soon as I got my badge I bumped into a friend from Ireland! So maybe OpenWorld isn't so big after all!

    First port of call was Larry's keynote. As always, Larry was provocative and thought provoking. His key points were announcing the Oracle cloud offering in IaaS, PaaS and SaaS, pointing out that Fusion Apps are cloud enabled, and finally announcing the 12c Database, making a big play of its new multi-tenancy features. His contention was that multi-tenancy will simplify cloud development and provide better security by providing DB-level isolation for applications and customers.

    The next day, Monday, was my first full day at OpenWorld. The first session I attended was on monitoring of OSB - a very interesting presentation on the benefits achieved by an Illinois-area telco, US Cellular. There was a great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment, in terms of improved provisioning and time to market, as well as better performance insight and assistance with capacity planning.

    Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going.

    Last night I attended the BOF on Managed File Transfer, where Dave Berry replayed Oracle's thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release. Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing. He also demoed an early version of the functionality that would simplify setting up MFT in SOA Suite and make tracking activity much easier.

    So much for Day 1. I also ran into scores of old friends and colleagues, and had a pleasant dinner with my friend from Ireland, where I caught up on the latest news from Oracle UK. Not bad for Day 1!

    Read the article

  • SQL Server PowerShell Provider follows the Version of PowerShell on the Host and other errata

    - by BuckWoody
    There may be some misunderstanding of how the PowerShell Provider for SQL Server works. I've written an article or two explaining that you can use PowerShell with SQL Server without having the SQL Server 2008 (or higher) provider around. After all, PowerShell just uses .NET, and SQL Server "Server Management Objects" (SMO) listen on that interface as well. In SQL Server 2008 and higher we created a "MiniShell" for PowerShell that gives you the ability to treat a SQL Server instance as a drive (called a "Provider" or path or drive) and a few commands (called cmdlets). Using these two simple constructs you can move around SQL Server quickly and work with the objects it holds.

    I read the other day where someone stated that we had "re-compiled" PowerShell, so that you would have version 1.0 from SQL Server and 2.0 on your new server. Not so! Drop to a SQLPS prompt and a PowerShell prompt and type this in each:

        $PSVersionTable

    They should return the same value. You can think of a MiniShell as simply a compiled "profile" that gives you those providers and cmdlets automatically - that's all. In fact, you can load the SMO libraries yourself without the SQL Server 2008 Provider anywhere in sight. I do this all the time, since the MiniShell also has other restrictions.

    Also remember that if you run a PowerShell script as a SQL Agent Job step type (in 2008 and higher), you're running under the context of the account that starts Agent - I think most folks know this, but it's good to keep in mind.

    There's a re-written section of Books Online that goes over working with this very nicely. It also covers the questions "How do I connect to another server using the SQL Server PowerShell Provider" (hint: it's just CD) and "How do I load all the SMO stuff if I don't want to use the Provider", and more. Be sure to check out the note at the bottom that explains the firewall exceptions you'll need to enable to CD to that remote server. Here's that link: http://msdn.microsoft.com/en-us/library/cc281947.aspx

    Read the article

  • Can't start Psychonauts game -- segfault

    - by tremby
    This looks similar to Psychonauts Humble Indie Bundle V error but I don't have the ERROR message (missing GL capability) and its solution does not work for me. When trying to run Psychonauts from the Humble Indie Bundle on my x86_64 laptop running Ubuntu 12.04 I get the following output:

    <bjn@segnus:/usr/local/games/psychonauts>$ ./Psychonauts
    STUBBED: fix up the rest of the SSE code first at DetectSSESupport (/home/icculus/projects/psychonauts/Source/CommonLibs/DFMath/MathGeneral.cpp:32)
    STUBBED: write me? at SetPCLanguage (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:120)
    STUBBED: fix up the rest of the SSE code first at DetectCPUCaps (/home/icculus/projects/psychonauts/Source/game/luatest/Game/PCGameApp.cpp:223)
    STUBBED: check LANG envr var at _GetDefaultGameLanguage (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:171)
    Console created
    Save path: /home/bjn/.local/share/Psychonauts
    Write path: WorkResource
    STUBBED: inline asm at SSEMul_4x4_4x4_2arg (/home/icculus/projects/psychonauts/Source/CommonLibs/DFMath/Matrix.cpp:710)
    STUBBED: inline asm at SSEMul_4x4_4x4_3arg (/home/icculus/projects/psychonauts/Source/CommonLibs/DFMath/Matrix.cpp:698)
    ******** unit test failed ********
    STUBBED: VK_* at InitInputNames (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1220)
    No joysticks detected
    Transport started
    DaveD: NCListenSocket: Listening on port 40001
    SDL_SetVideoMode() failed: Failed loading libGL.so.1
    Start Up completed in 0.06 seconds
    [1] 9718 segmentation fault (core dumped) ./Psychonauts
    <bjn@segnus:/usr/local/games/psychonauts>$

    Output of lspci:

    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
    00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
    00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
    00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
    00:1f.0 ISA bridge: Intel Corporation ICH9M-E LPC Interface Controller (rev 03)
    00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 03)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
    02:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 05)
    02:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 22)
    0c:00.0 Network controller: Intel Corporation WiFi Link 5100

    Any ideas?

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler.

    This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations (a small sketch of one follows this excerpt). Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.

    Drivers
    - Elastic compute and storage resources
    - Cost avoidance

    Solution
    Here's a sketch of a solution using our Windows Azure HPC SDK:

    Ingredients
    - Web Role – hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well.
    - Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    - Database – stores state information about the job queue and resource configuration for the solution.
    - Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.

    Training
    Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure HPC Scheduler (3 labs) – The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
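    To make "embarrassingly parallel" concrete, here is a minimal, self-contained C# sketch of a Monte Carlo pi estimate. It runs its independent slices with PLINQ in a single process purely for illustration; in an actual HPC Scheduler deployment each slice would be a job running on its own worker role instance, persisting its partial count to blob or table storage. The slice sizes and names are hypothetical.

    ```csharp
    // Sketch only: an embarrassingly parallel Monte Carlo estimate of pi.
    // Each slice is fully independent -- no shared state -- which is what makes
    // this kind of workload trivial for a scheduler to spread across nodes.
    using System;
    using System.Linq;

    class MonteCarloPi
    {
        // Count random points that land inside the unit quarter circle.
        static long CountHits(long samples, int seed)
        {
            var rng = new Random(seed);
            long hits = 0;
            for (long i = 0; i < samples; i++)
            {
                double x = rng.NextDouble(), y = rng.NextDouble();
                if (x * x + y * y <= 1.0) hits++;
            }
            return hits;
        }

        static void Main()
        {
            const long samplesPerSlice = 5000000; // hypothetical slice size
            const int slices = 8;                 // stands in for 8 compute instances

            // PLINQ plays the role of the scheduler here, fanning slices out
            // across local cores instead of across worker role instances.
            long totalHits = Enumerable.Range(0, slices)
                                       .AsParallel()
                                       .Select(s => CountHits(samplesPerSlice, s + 1))
                                       .Sum();

            double pi = 4.0 * totalHits / ((double)samplesPerSlice * slices);
            Console.WriteLine("Estimated pi = {0:F5}", pi);
        }
    }
    ```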

    Read the article

  • schedule compliance and keeping technical supports and resolving issues

    - by imays
    I am an entrepreneur running a small software company. The flagship product was developed by me, and the company has grown to 14 people. One point of pride is that we have never had to take investment or loans. The core development team is 5 people: 3 seniors and 2 juniors.

    After the first release, we received many issues from our customers. Most of them are bug reports, customization needs, usage questions and upgrade requests. Issues come in from customers many times every day, and each one takes some of our developers' time, sometimes a little and sometimes a lot. Because our product is a software development kit (SDK), most questions can be answered only by our developers, and resolving bugs requires developer involvement. Estimating the time to resolve a bug is hard; I fully understand that. However, our developers insist they cannot set a due date for any project because they are busy doing technical support and bug fixes for customer issues every day. And, of course, they never work overtime.

    I suggested dividing the team into two parts: one focused on development by milestones, the other doing technical support and bug fixes without set due dates. Then we could announce our release plan officially. After each release, the two parts would swap roles for the next milestone. However, they said no, because it would be impossible to share knowledge and design documents fully. They still say they cannot set a release date, and they ask me to keep the due dates flexible; they will not fix the due date of any milestone.

    Fortunately, our company has no loans or investors, so we are not being choked. But I think it is a bad idea to let this situation continue. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates. Companies have limited time and money. If a flexible due date without limit is acceptable, would a flexible payday be acceptable too?

    What is the root cause of our problem? All I want is to set and reliably hit the due date of each milestone without losing our frequent technical support. I think there must be a solution for this situation. Please advise. Thanks in advance.

    PS. Our tools and ways of project management are Trello, a Mantis-like issue tracker, shared calendar software and scrum (cards collected into series of 'small and high completeness' projects).

    Read the article

  • Handling permissions in a MVP application

    - by Chathuranga
    In a Windows Forms payroll application employing the MVP pattern (for a small-scale client), I'm planning to handle user permissions as follows (permission based), since its implementation should be less complicated and more straightforward.

    NOTE: The system could be used by a few users simultaneously (maximum 3), and the database is on the server side.

    This is my User model. Each user has a list of permissions granted to them:

    class User
    {
        public string UserID { get; set; }
        public string Name { get; set; }
        public string NIC { get; set; }
        public string Designation { get; set; }
        public string PassWord { get; set; }
        public List<string> PermissionList = new List<string>();
        public bool Status { get; set; }
        public DateTime EnteredDate { get; set; }
    }

    When a user logs in, the system keeps the current user in memory. For example, in the BankAccountDetailEntering view I control the controls' state according to permissions as follows:

    public partial class BankAccountDetailEntering : Form
    {
        public bool AccountEditable { get; set; }

        private void BankAccountDetailEntering_Load(object sender, EventArgs e)
        {
            cmdEditAccount.Enabled = false;
            OnLoadForm(sender, e); // Event fires...
            if (AccountEditable)
            {
                cmdEditAccount.Enabled = true;
            }
        }
    }

    For this purpose all my relevant presenters (like BankAccountDetailPresenter) should be aware of the User model as well, in addition to the corresponding business model they present to the view:

    class BankAccountDetailPresenter
    {
        BankAccountDetailEntering _View;
        BankAccount _Model;
        User _UserModel;
        DataService _DataService;

        public BankAccountDetailPresenter(BankAccountDetailEntering view, BankAccount model, User userModel, DataService dataService)
        {
            _View = view;
            _Model = model;
            _UserModel = userModel;
            _DataService = dataService;
            WireUpEvents();
        }

        private void WireUpEvents()
        {
            _View.OnLoadForm += new EventHandler(_View_OnLoadForm);
        }

        private void _View_OnLoadForm(object sender, EventArgs e)
        {
            foreach (string s in _UserModel.PermissionList)
            {
                if (s == "CanEditAccount")
                {
                    _View.AccountEditable = true;
                    return;
                }
            }
        }

        public void Show()
        {
            _View.ShowDialog();
        }
    }

    So I'm handling the user permissions in the presenter by iterating through the list. Should this be done in the presenter or the view? Are there any more promising ways to do this? Thanks.
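    As an aside on iterating the list: one common tightening (just a sketch; the permission names below mirror the question and are otherwise hypothetical) is to hold the permissions in a HashSet and reduce the handler to a single Contains call.

    ```csharp
    // Hypothetical standalone sketch: the presenter's foreach/return pattern
    // collapses to one set lookup if permissions are stored in a HashSet.
    using System;
    using System.Collections.Generic;

    class PermissionCheckDemo
    {
        static void Main()
        {
            var permissionList = new HashSet<string> { "CanViewAccount", "CanEditAccount" };

            // Equivalent of _View.AccountEditable = ... in the presenter:
            bool accountEditable = permissionList.Contains("CanEditAccount");
            Console.WriteLine("AccountEditable = {0}", accountEditable);
        }
    }
    ```

    Wherever the check ends up living, keeping the lookup in one expression makes it easy to move between presenter and view later.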

    Read the article

  • Tap into MySQL's Amazing Performance Results with the Performance Tuning Course

    - by Antoinette O'Sullivan
    Want to leverage the high-speed load utilities, distinctive memory caches, full-text indexes, and other performance-enhancing mechanisms that MySQL offers to fuel today's critical business systems? The authentic MySQL Performance Tuning course, in 4 days, teaches you to evaluate the MySQL architecture, use the tools, configure the database for performance, tune application and SQL code, tune the server, examine the storage engines, assess the application architecture, and learn general tuning concepts.

    You can take this course in one of the following three ways:

    - Training-on-Demand: Access streaming video, instructor delivery of this course from your own desk, at your own pace. Book time for hands-on practice when it suits you.
    - Live-Virtual Class: Take this instructor-led class live from your own desk. With 700 events on the schedule you are sure to find a time and date to suit you!
    - In-Class: Travel to a classroom to take this class. A sample of events on the schedule is as follows.

    Location | Date | Delivery Language
    Hamburg, Germany | 22 October 2012 | German
    Prague, Czech Republic | 1 October 2012 | Czech
    Warsaw, Poland | 3 December 2012 | Polish
    London, England | 19 November 2012 | English
    Rome, Italy | 23 October 2012 | Italian
    Lisbon, Portugal | 6 November 2012 | European Portuguese
    Aix-en-Provence, France | 4 September 2012 | French
    Strasbourg, France | 16 October 2012 | French
    Nieuwegein, Netherlands | 26 November 2012 | Dutch
    Madrid, Spain | 17 December 2012 | Spanish
    Mechelen, Belgium | 1 October 2012 | English
    Riga, Latvia | 10 December 2012 | Latvian
    Petaling Jaya, Malaysia | 10 September 2012 | English
    Edmonton, Canada | 10 December 2012 | English
    Vancouver, Canada | 10 December 2012 | English
    Ottawa, Canada | 26 November 2012 | English
    Toronto, Canada | 26 November 2012 | English
    Montreal, Canada | 26 November 2012 | English
    Mexico City, Mexico | 10 September 2012 | Spanish
    Sao Paulo, Brazil | 26 November 2012 | Brazilian Portuguese
    Tokyo, Japan | 19 November 2012 | Japanese

    For further information on this class, or to register your interest in additional events, go to the Oracle University Portal: http://oracle.com/education/mysql

    Read the article

  • Congratulations to the 2012 Oracle Spatial Award Winners!

    - by Mandy Ho
    I just returned from the 2012 Location Intelligence and Oracle Spatial User conference in Washington, DC, held by Directions Magazine. It was a great conference, with presentations from across the country and globe, networking with Oracle Spatial users, and meetings with new customers and partners. As part of the yearly event, Oracle recognizes special customers and partners for their contributions to advancing mainstream solutions using geospatial technology. This was the 8th year that Oracle has recognized innovative industry leaders.

    The awards were given in three categories: Education/Research, Innovator and Partnership. Here's a little on each of the award winners.

    Education and Research Award Winner: Technical University of Berlin
    The Institute for Geodesy and Geoinformation Science of the Technical University of Berlin (TU Berlin) was selected for its leading research work in mapping urban and regional space onto virtual 3D city and landscape models, and its use of Oracle Spatial, including 3D Vector and GeoRaster type support, as the data management platform.

    Innovator Award Winner: Istanbul Metropolitan Municipality
    Istanbul is the 3rd largest metropolitan area in Europe. One of its greatest challenges is organizing efficient public transportation for citizens and visitors: there are 15 types of transportation organized by 8 different agencies. To solve this problem, the Directorate of GIS of Istanbul Metropolitan Municipality has created a multi-modal itinerary system to help citizens decide between public transport and their private cars. They chose to use the Oracle Spatial Network Model as the solution in their system, together with Java and SOAP web services.

    Partnership Award Winners: CSoft Group and OSCARS
    The Partnership award is given to the ISV or integrator that has demonstrated outstanding achievements in partnering with Oracle on the development side and in taking solutions to market.
    CSoft Group, the largest Russian integrator and consultancy provider in CAD and GIS, was selected by the Oracle Spatial product development organization for its key role in delivering geospatial solutions based on Oracle Database and Fusion Middleware to the Russian market.
    OSCARS provides consulting and training in France, Belgium and Luxembourg. With only 3 full-time staff, they have achieved significant success with leading-edge customer implementations leveraging the latest Oracle Spatial/MapViewer technologies, and they deliver training throughout Europe.

    Finally, we also gave two Special Recognition awards to partners who helped contribute to the Oracle Partner Network Spatial Specialization. These two partners provided insight and technical expertise from a partner perspective to help launch the new certification program for Oracle Spatial technologies.
    Award Winners: ThinkHuddle and OSCARS

    For more pictures from the conference and the awards, visit our Facebook page: http://www.facebook.com/OracleDatabase

    Read the article

  • Oracle BPM: Adding an attachment during the Human Task Initialization

    - by kyap
    Recently I had a requirement from a customer to instantiate a Human Task that can accept a payload containing a binary attribute (base64) representing an actual document. According to the same requirement, this attribute should be shown as a hyperlink in the Worklist UI to the assignee(s), from which the assignees can download the document to their local machines for review. Multiple options were evaluated, but most required heavy customization. In order to leverage Oracle BPM out-of-the-box functionality as much as possible, I decided to add this document as a read-only attachment. We can easily achieve this within the Worklist Application, but it is a bit more challenging when we want to attach the document during the Human Task initialization. After some investigation (on BPM 11g PS4FP and PS5), here's the way to go:

    1. Create an asynchronous BPM process, and use this xsd to create 2 Business Objects, FullPayload and PartialPayload.
    2. Create 2 process variables, 'vFullPayload' and 'vPartialPayload', using the Business Objects created above.
    3. Implement the Start Event with the initial Data Association, with an input argument of type 'FullPayload'.
    4. Drag a User Task into the process. Implement the User Task as usual, using 'vPartialPayload' as the input type, and assign the task to your favorite tester (mine is jcooper).
    5. Here's the main course: start the Data Association and map the payload into 'execData' as follows:

    FROM | TO
    vFullPayload.attachment.mimetype | execData.attachment[1].mimeType
    vFullPayload.attachment.filename | execData.attachment[1].name
    bpmn:getDataObject('vFullPayload')/ns:attachment/ns:content | execData.attachment[1].content
    'BPM' | execData.attachment[1].attachmentScope
    false() | execData.attachment[1].doesBelongToParent
    'weblogic' | execData.attachment[1].updateBy
    xp20:current-dateTime() | execData.attachment[1].updateDate

    (Note: check the <Humantask>WorkflowTask.xsd file in your project's xsd folder to discover the different options for attachmentScope and storageType.)

    6. Your process is complete. Just build a standard ADF UI and deploy the process/UI onto your BPM Server for testing. Here's an example, with a base64-encoded pdf file: application-pdf.txt
    7. Finally, go to the BPM Worklist application to check the result!

    Please note that Oracle BPM, by default, limits the attachment document size to 2 MB. If you are planning to have bigger attachments in your process, it is recommended to store your documents in a content management server (such as Oracle UCM) and pass a reference instead. It is possible to configure Oracle BPM to store attachments directly in Oracle UCM too, and I believe we can use the storageType and ucmMetadataItem attributes for this purpose... I will confirm once I have access to an Oracle UCM instance for testing :)
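    If you need to produce a base64 payload like the application-pdf.txt used in step 6, any base64 encoder will do. Here is a trivial C# sketch; the file paths are placeholders, and this is only one way to generate the test content:

    ```csharp
    // Hypothetical helper for creating the base64 test content used in step 6.
    // The paths are placeholders; swap in your own document and output location.
    using System;
    using System.IO;

    class EncodeAttachment
    {
        static void Main()
        {
            byte[] document = File.ReadAllBytes(@"C:\temp\sample.pdf");
            string base64 = Convert.ToBase64String(document);
            File.WriteAllText(@"C:\temp\application-pdf.txt", base64);
            Console.WriteLine("Wrote {0} base64 characters", base64.Length);
        }
    }
    ```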

    Read the article

< Previous Page | 746 747 748 749 750 751 752 753 754 755 756 757  | Next Page >