Search Results

Search found 3122 results on 125 pages for 'dependency replicator'.

Page 97 of 125

  • Mysql 5.5 server not working

    - by rajesh
    I had Ubuntu 14.04 installed on my system. I recently updated Ubuntu and now MySQL does not start; Workbench says that the MySQL server has been stopped. When I try to start it, it gives me the following error:

    2014-08-12 23:02:04 - Checking server status...
    2014-08-12 23:02:04 - Trying to connect to MySQL...
    2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
    2014-08-12 23:02:04 - Assuming server is not running
    2014-08-12 23:02:04 - Server start done.
    2014-08-12 23:02:04 - Checking server status...
    2014-08-12 23:02:04 - Trying to connect to MySQL...
    2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
    2014-08-12 23:02:04 - Assuming server is not running

    Also, when I try to log in using the terminal (mysql -u root -p <password>) I get the following error:

    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I have also tried to reinstall the MySQL server but am unable to do so; it gives me the following:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server-5.5 is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

    I have data which I have not taken a backup of, as I am unable to log into the server. I am a newbie; please help me resolve this issue without losing my data. Awaiting your earliest response. Below is the error message from cat /var/log/mysql/error.log:

    140813 21:22:50 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
    140813 21:22:50 [Note] Plugin 'FEDERATED' is disabled.
    140813 21:22:50 InnoDB: The InnoDB memory heap is disabled
    140813 21:22:50 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    140813 21:22:50 InnoDB: Compressed tables use zlib 1.2.8
    140813 21:22:50 InnoDB: Using Linux native AIO
    140813 21:22:50 InnoDB: Initializing buffer pool, size = 128.0M
    140813 21:22:50 InnoDB: Completed initialization of buffer pool
    140813 21:22:50 InnoDB: highest supported file format is Barracuda.
    140813 21:22:50 InnoDB: Waiting for the background threads to start
    140813 21:22:51 InnoDB: 5.5.38 started; log sequence number 80726593570
    140813 21:22:51 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
    140813 21:22:51 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    140813 21:22:51 [Note] Server socket created on IP: '127.0.0.1'.
    140813 21:22:51 [ERROR] Fatal error: Can't open and lock privilege tables: Incorrect file format 'user'

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
    I'm kinda new in Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. Installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands nsalvarrey@Delleuze:~/glib-2.6.3$ ^C nsalvarrey@Delleuze:~/glib-2.6.3$ And then, with the make command, I get this: galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value' extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default"))); ^ In file included from garray.c:35:0: galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new' extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default"))); ^ make[4]: *** [garray.lo] Error 1 make[4]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[3]: *** [all-recursive] Error 1 make[3]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[2]: *** [all] Error 2 make[2]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[1]: *** [all-recursive] Error 1 
make[1]: se sale del directorio «/home/nsalvarrey/glib-2.6.3» make: *** [all] Error 2 nsalvarrey@Delleuze:~/glib-2.6.3$ (it's actually a lot longer) Can somebody help me?

    Read the article

  • WebCenter Customer Spotlight: College of American Pathologists

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    The College of American Pathologists goes live with Oracle WebCenter - Imaging, AP Invoice Automation, and EBS Managed Attachment with support for Imaging content. The College of American Pathologists (CAP) is the leading organization of board-certified pathologists, serving more than 18,000 physician members; 7,000 laboratories are accredited by the CAP, and approximately 22,000 laboratories are enrolled in the College's proficiency testing programs. The business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of the Imaging and Managed Attachment functionality, giving the business unprecedented access to both structured and unstructured content from within their enterprise application. The solution improves customer service turnaround time, provides better compliance, and improves maintenance and management of the technology infrastructure.

    Company Overview
    The College of American Pathologists (CAP), celebrating 50 years as the gold standard in laboratory accreditation, is a medical society serving more than 17,000 physician members and the global laboratory community. It is the world's largest association composed exclusively of board-certified pathologists and is the worldwide leader in laboratory quality assurance. The College advocates accountable, high-quality, and cost-effective patient care. The more than 17,000 pathologist members of the College of American Pathologists represent board-certified pathologists and pathologists in training worldwide. More than 7,000 laboratories are accredited by the CAP, and approximately 23,000 laboratories are enrolled in the College's proficiency testing programs.

    Business Challenges
    The CAP business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of the Imaging and Managed Attachment functionality, giving the business unprecedented access to both structured and unstructured content from within their enterprise application:
    - Bring more flexibility to systems and programs in order to adapt quickly
    - Get a 360-degree view of the customer
    - Reduce the cost of running the business

    Solution Deployed
    With the help of Oracle Consulting, the customer implemented Oracle WebCenter Content as the centralized E-Business Suite document repository. The solution captures, presents, and manages all unstructured content (PDFs, word processing documents, scanned images, etc.) related to Oracle E-Business Suite transactions, exposing the related content through the familiar EBS user interface.

    Business Results
    The CAP achieved the following benefits from the implemented solution:

    Managed Attachment Solution
    - Align with the strategic Oracle Fusion Middleware platform
    - Integrate with CAP's existing data capture capabilities
    - Single user interface provided by the Managed Attachment solution for all content
    - Better compliance and improved collaboration

    Accounts Payable Invoice Processing Imaging Solution
    - Automated invoice management, eliminating dependency on paper materials and improving compliance, collaboration, and accuracy
    - A single repository to house and secure scanned invoices and all supplemental documents
    - Greater management visibility of the invoice entry process

    Additional Information
    - CAP OpenWorld Presentation
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Imaging
    - Oracle Consulting

    Read the article

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little-used, little-known, but very powerful capability - it lets one piece of template code behave dynamically based on, for example, a runtime variable's value. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case, though definitely a highly specialized one. The example I will illustrate is much simpler - how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags, in particular the special '<$' tag, which is processed just prior to execution in the runtime code - so this code is included in the ODI scenario code and it is processed after variables are substituted (unlike the '<?' tag). So the lines below are not equal:

    <$ if ( "#V_COND".equals("1") ) { $> EMP.SAL > 1 <$ } else { $> 1 = 1 <$ } $>

    <? if ( "#V_COND".equals("1") ) { ?> EMP.SAL > 1 <? } else { ?> 1 = 1 <? } ?>

    When the <? code is evaluated, the code is executed without variable substitution - so we do not get the desired semantics; we must use the <$ code. You can see the Jython (Java) code in red is the conditional if statement that drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code. For this illustration you need at least the ODI 11.1.1.6 release - with the vanilla 11.1.1.5 release it didn't work for me (maybe there are patches?). As I mentioned, normally KMs don't have dependencies on variables - since any users must then have those variables defined, etc. - but it does afford a lot of runtime flexibility if such capabilities are required. Something to keep in mind, definitely.

    Read the article

  • apt-get install is not able to access /etc

    - by HorusKol
    I put together an Ubuntu 12.04 server a couple of weeks ago and everything seemed fine until this morning. Suddenly, I'm having trouble installing new packages - at first I thought there was something wrong with tinyproxy, and so I tried installing squid instead. However, I get similar results:

    Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
    ...
    /var/lib/dpkg/info/squid3.postinst: 1: /var/lib/dpkg/info/squid3.postinst: cannot open /etc/squid3/squid.conf: No such file

    It seems that apt-get is not creating the configuration files needed for these programs. I haven't modified any configuration or user groups since the last successful update/install of packages. /etc is present, and is populated with a nice healthy tree of configuration files. It is owned and grouped to root, and has the properties drwxr-xr-x - all the files and folders inside seem to be fine too, as far as I can tell. I've even been able to edit/save a couple as sudo.

    Full output from installing tinyproxy:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed: tinyproxy
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/61.6 kB of archives.
    After this operation, 201 kB of additional disk space will be used.
    Selecting previously unselected package tinyproxy.
    (Reading database ... 58916 files and directories currently installed.)
    Unpacking tinyproxy (from .../tinyproxy_1.8.3-1_amd64.deb) ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Setting up tinyproxy (1.8.3-1) ...
    Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
    invoke-rc.d: initscript tinyproxy, action "start" failed.
    dpkg: error processing tinyproxy (--configure):
     subprocess installed post-installation script returned error exit status 70
    Errors were encountered while processing: tinyproxy
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Result of strace after installation:

    18467 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    18467 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    18467 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
    18467 open("/etc/tinyproxy.conf", O_RDONLY) = -1 ENOENT (No such file or directory)

    Read the article

  • What are the software design essentials? [closed]

    - by Craig Schwarze
    I've decided to create a one-page "cheat sheet" of essential software design principles for my programmers. It doesn't explain the principles in any great depth, but is simply there as a reference and a reminder. Here's what I've come up with - I would welcome your comments. What have I left out? What have I explained poorly? What is there that shouldn't be?

    Basic Design Principles
    - The Principle of Least Surprise - your solution should be obvious, predictable and consistent.
    - Keep It Simple Stupid (KISS) - the simplest solution is usually the best one.
    - You Ain't Gonna Need It (YAGNI) - create a solution for the current problem rather than for what might happen in the future.
    - Don't Repeat Yourself (DRY) - rigorously remove duplication from your design and code.

    Advanced Design Principles
    - Program to an interface, not an implementation - don't declare variables to be of a particular concrete class. Rather, declare them as an interface, and instantiate them using a creational pattern.
    - Favour composition over inheritance - don't overuse inheritance. In most cases, rich behaviour is best added by instantiating objects, rather than inheriting from classes.
    - Strive for loosely coupled designs - minimise the interdependencies between objects. They should be able to interact with minimal knowledge of each other via small, tightly defined interfaces.
    - Principle of Least Knowledge - also called the "Law of Demeter", and colloquially summarised as "only talk to your friends". Specifically, a method of an object should only invoke methods on the object itself, objects passed as parameters to the method, any object the method creates, and any components of the object.

    SOLID Design Principles
    - Single Responsibility Principle - each class should have one well-defined purpose, and only one reason to change. This reduces the fragility of your code, and makes it much more maintainable.
    - Open/Closed Principle - a class should be open to extension, but closed to modification. In practice, this means extracting the code that is most likely to change into another class, and then injecting it as required via an appropriate pattern.
    - Liskov Substitution Principle - subtypes must be substitutable for their base types. Essentially, get your inheritance right. In the classic example, type square should not inherit from type rectangle, as they have different properties (you can independently set the sides of a rectangle). Instead, both should inherit from type shape.
    - Interface Segregation Principle - clients should not be forced to depend upon methods they do not use. Don't have fat interfaces; rather, split them up into smaller, behaviour-centric interfaces.
    - Dependency Inversion Principle - there are two parts to this principle: high-level modules should not depend on low-level modules, both should depend on abstractions; and abstractions should not depend on details, details should depend on abstractions. In modern development, this is often handled by an IoC (Inversion of Control) container.
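
    To make two of these concrete, here is a minimal, hypothetical Java sketch (all class and interface names are invented for illustration) of "program to an interface, not an implementation" combined with constructor injection, which is also the usual shape the Dependency Inversion Principle takes in code:

        // High-level policy depends on an abstraction...
        interface MessageSender {
            void send(String recipient, String body);
        }

        // ...and the low-level detail implements it.
        class SmtpMessageSender implements MessageSender {
            @Override
            public void send(String recipient, String body) {
                System.out.println("SMTP send to " + recipient + ": " + body);
            }
        }

        // The high-level class is composed with *some* MessageSender; it never creates one itself.
        class OrderNotifier {
            private final MessageSender sender;

            OrderNotifier(MessageSender sender) { // dependency injected via the constructor
                this.sender = sender;
            }

            void orderShipped(String customerEmail) {
                sender.send(customerEmail, "Your order has shipped.");
            }
        }

        public class DesignPrinciplesDemo {
            public static void main(String[] args) {
                OrderNotifier notifier = new OrderNotifier(new SmtpMessageSender());
                notifier.orderShipped("jane@example.com");
            }
        }

    Because OrderNotifier only knows the MessageSender interface, a test (or an IoC container) can hand it a different implementation without touching the high-level class.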

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system; when a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this:

    Application layer:

    public class TaskService
    {
        //...
        public void Add(Guid categoryId, string description)
        {
            var category = _categoryRepository.GetById(categoryId);
            var status = _statusRepository.GetById(Constants.Status.OutstandingId);
            var task = Task.Create(category, status, description);
            _taskRepository.Save(task);
        }
    }

    Entity:

    public class Task
    {
        //...
        public static Task Create(Category category, Status status, string description)
        {
            return new Task
            {
                Category = category,
                Status = status,
                Description = description
            };
        }
    }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this:

    Entity:

    public class Task
    {
        //...
        public static Task Create(Category category, string description)
        {
            return new Task
            {
                Category = category,
                Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                Description = description
            };
        }
    }

    The status repository is dependency-injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility?

    Here is a more extreme example; here the domain decides urgency:

    Entity:

    public class Task
    {
        //...
        public static Task Create(Category category, string description)
        {
            var task = new Task
            {
                Category = category,
                Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                Description = description
            };

            if (someCondition)
            {
                if (someValue > anotherValue)
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                }
            }
            else
            {
                task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
            }

            return task;
        }
    }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?

    Read the article

  • Access-based Enumeration (December 04, 2009)

    - by user12612012
    Access-based Enumeration (ABE) is another recent addition to the Solaris CIFS Service, delivered into snv_124. Designed to be compatible with Windows ABE, which was introduced in Windows Server 2003 SP1, this feature filters directory content based on the user browsing the directory. Each user can only see the files and directories to which they have access. This can be useful to implement an out-of-sight, out-of-mind policy, or simply to reduce the number of files presented to each user - making it easier to find files in directories containing a large number of files.

    ABE is managed on a per-share basis by a new boolean share property called, as you might imagine, abe, which is described in sharemgr(1M). When set to true, ABE filtering is enabled on the share, and directory entries to which the user has no access will be omitted from directory listings returned to the client. When set to false or not defined, ABE filtering will not be performed on the share. The abe property is not defined by default.

    Administration is straightforward, for example:

    # zfs set sharesmb=abe=true,name=jane tank/home/jane
    # sharemgr show -vp
        zfs
            zfs/tank/home/jane nfs=() smb=()
                jane=/export/home/jane smb=(abe="true")

    ABE is also supported via sharemgr(1M) and on smbautohome(4) shares.

    Note that even though a file is visible in a share with ABE enabled, it doesn't automatically mean that the user will always be able to open the file. If a user has read-attribute access to a file, ABE will show it, but access will be denied if this user tries to open the file for reading or writing.

    We considered supporting ABE on NFS shares, as suggested by the name of PSARC/2009/375, but we ran into problems due to NFS client readdir caching. NFS clients maintain a common directory entry cache for all users, which not only defeats the intent of ABE but can lead to very confusing results. If multiple users are looking at the content of a directory with ABE enabled, the entries that get cached will depend on who looks at the directory first. Subsequent users may see files that ABE on the server would have filtered out, or files may be missing because they were filtered out for the original user.

    Although this issue can be resolved by disabling the NFS client readdir cache, this was deemed to be an unsuitable solution because it would create a dependency between a server share property and the configuration on all NFS clients, and there was the potential for differences in behavior across the various NFS clients. It just seemed to add unnecessary administration complexity, so we pulled it out.

    References for more information:
    - PSARC/2009/246 ZFS support for Access Based Enumeration
    - PSARC/2009/375 ABE share property for NFS and SMB
    - 6802734 Support for Access Based Enumeration
    - 6802736 SMB share support for Access Based Enumeration
    - Windows Access-based Enumeration

    Read the article

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
    With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version, but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states:

    This hands-on lab builds a typical 3-tier Java EE 6 Web application that retrieves customer information from a database and displays it in a Web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a Web application, like validation, caching, observer, partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies.

    The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. Here is the complete Table of Contents:

    The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" version, which includes GlassFish anyway. All the feedback received from the lab has been incorporated in the instructions and bugs filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees could easily complete the lab, and some even completed it in much less than 3 hours. That indicates that either more content needs to be added to the lab or the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude it was the latter. Truly a joy to conduct a lab for 40 Devoxxians!

    Another related lab that might be handy for folks is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to:
    - Create a 2-instance GlassFish cluster
    - Front-end it with a Web server and a load balancer
    - Demonstrate session replication and failover
    - Monitor the application using JavaScript

    The complete lab instructions and source code are available, and you can try them. I plan to continue evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features, and will announce them on this blog. Let me know what else you would like to see in future versions.
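
    For readers who have not seen the lab, here is a small, hypothetical sketch (not taken from the lab material itself; the entity and class names are invented) of the kind of Java EE 6 code it walks through - a JAX-RS 1.1 resource backed by an EJB 3.1 session bean and a JPA 2 entity, so that each row of the customer table is published as a RESTful resource:

        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.xml.bind.annotation.XmlRootElement;

        // Invented JPA 2 entity standing in for the lab's customer table.
        @Entity
        @XmlRootElement
        class Customer {
            @Id
            public Long id;
            public String name;
        }

        // Hypothetical JAX-RS 1.1 resource: the EJB 3.1 bean provides transactions,
        // and the injected EntityManager does the persistence work.
        @Stateless
        @Path("customers")
        public class CustomerResource {

            @PersistenceContext
            EntityManager em;

            @GET
            @Path("{id}")
            @Produces("application/xml")
            public Customer find(@PathParam("id") Long id) {
                return em.find(Customer.class, id);   // one row, one resource
            }

            @POST
            @Consumes("application/xml")
            public void create(Customer customer) {
                em.persist(customer);                 // add a new customer row
            }
        }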

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I’ve been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration.  One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to “get latest” against the source repository and immediately build with no errors.  This can break down both in an automated build scenario and a “new guy” scenario when the solution has external dependencies that may not be present in the build environment. The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called “Reference Items”.  I keep this folder as part of the solution and check all of the binaries into source control so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well and gets me closer to the ideal where a new developer installs the development IDE, get latest and can immediately build and run unit tests before jumping into coding the feature of the day. This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention.  The issue that I ran into is that with Silverlight (at least version 4), the behavior of the “Add Reference” command when adding reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is setting in the Reference Items folder and downloaded to the build machine it cannot be found at compile time and the build will fail. To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file).  Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath.  Here’s a before and after example of the component that ultimately led me to the investigation behind this post: Before: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" /> After: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">       <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>     </Reference>

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    Engine (in separate DLL):

    public class Engine
    {
        DatabaseEntities _engineContext;

        public Engine()
        {
            // Connection string and procedure managed in DB layer
            _engineContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
        {
            // Suppose there's some validation too, non-trivial stuff
            someEntity.Value = newValue;
            _engineContext.SaveChanges();
        }
    }

    And the report:

    public partial class MyReport : Page
    {
        Engine _engine;
        DatabaseEntities _webpageContext;

        public MyReport()
        {
            _engine = new Engine();
            _webpageContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
        {
            SomeEntity someEntity;

            // Wrong way:
            // Get the entity from the webpage context
            someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
            // Send the entity from _webpageContext to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context

            // Right(?) way:
            // Get the entity from the engine context
            someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
            // Send the entity from the engine's context to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
        }
    }

    Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that - to only give the Engine entities from its own context. But this is a very error-prone design. I see the error of my ways now; I just don't know the right path. I'm considering:

    - Creating the connection in the Engine and passing it off to the webpage: always instantiate an Engine, make its context accessible from a property, and share it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    - Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?).
    - Only talking through IDs: creates redundancy, is not always practical, and sounds archaic. But at the same time, I already retrieve IDs from the page that I need to fetch anyway.

    What would be the best compromise here for safety, ease of use and understanding, stability, and speed?

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency on them. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as structured programming with for, while and do generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements.

    In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific example of a system for multithread programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

    th_id = omp_get_thread_num();
    #pragma omp critical
    {
        cout << "Hello World from thread " << th_id << '\n';
    }

    This example is an excerpt from: http://en.wikipedia.org/wiki/OpenMP

    Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

    wait(sem);
    th_id = get_thread_num();
    cout << "Hello World from thread " << th_id << '\n';
    signal(sem);

    In this example, things are pretty simple, and just a simple review is enough to show that the wait() and signal() calls are matched and that, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions with complex conditions that can be called by many threads. The consequences of creating deadlock, or of failing to make things thread-safe, can be hard to manage.

    Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
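
    As a point of comparison from another ecosystem (a hedged illustration, not an OpenMP answer): in Java the trend has been the same - the raw wait()/signal() pairing moves into library constructs from java.util.concurrent, so acquisition and release are written once and the lock is released even if the critical section throws. The class name and thread counts below are arbitrary:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.locks.Lock;
        import java.util.concurrent.locks.ReentrantLock;

        public class CriticalSectionDemo {
            public static void main(String[] args) throws InterruptedException {
                final Lock lock = new ReentrantLock();          // replaces the hand-managed semaphore
                ExecutorService pool = Executors.newFixedThreadPool(4);

                for (int i = 0; i < 4; i++) {
                    final int id = i;
                    pool.execute(new Runnable() {
                        public void run() {
                            lock.lock();                        // enter the critical region
                            try {
                                System.out.println("Hello World from thread " + id);
                            } finally {
                                lock.unlock();                  // always released, even on exception
                            }
                        }
                    });
                }

                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.SECONDS);
            }
        }

    Counting semaphores still exist (java.util.concurrent.Semaphore), but most designs reach first for locks, concurrent collections, or executors, which keep the matching of acquire and release out of application code.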

    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window. In the application directory "Paint Application". Create a file named "settings.gradle", with this content: include 'ColorChooser', 'Paint' Create another file in the same location, named "build.gradle", with this content: subprojects { apply plugin: "announce" apply plugin: "java" sourceSets { main { java { srcDir 'src' } resources { srcDir 'src' } } } } In the module directory "Paint". Create a file named "build.gradle", with this content: dependencies { compile fileTree("$rootDir/build/public-package-jars").matching { include '**/*.jar' } } task show << { configurations.compile.each { dep -> println "$dep ${dep.isFile()}" } } Note: The above is a temporary solution, as you can see, the expectation is that the JARs are in the 'build/public-packages-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks: Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform... netbeansHome = '/home/geertjan/netbeans-dev-201111110600' dependencies { compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar") def projectXml = new XmlParser().parse("nbproject/project.xml") projectXml.configuration.data."module-dependencies".dependency."code-name-base".each { if (it.text().equals('org.openide.filesystems')) { def dep = "$netbeansHome/platform/core/"+it.text().replace('.','-')+'.jar' compile files(dep) } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) { def dep = "$netbeansHome/platform/lib/"+it.text().replace('.','-')+'.jar' compile files(dep) } else { def dep = "$netbeansHome/platform/modules/"+it.text().replace('.','-')+'.jar' compile files(dep) } } } task show << { configurations.compile.each { dep -> println "$dep ${dep.isFile()}" } } However, when you run 'gradle classes' with the above, you get an error like this: geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes :Paint:compileJava [ant:javac] Note: Attempting to workaround javac bug #6512707 [ant:javac] [ant:javac] [ant:javac] An annotation processor threw an uncaught exception. [ant:javac] Consult the following stack trace for details. [ant:javac] java.lang.NullPointerException [ant:javac] at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058) No idea why the above happens, still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter who put the above together with me today, via Skype!

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3)

    - by Ankit G
    Delighted to announce the GA of EM Cloud Control Release 3 on all supported platforms. This release includes a new 12.1.0.3 version of the platform (OMS & Agent), along with revised new versions of several plug-ins and metadata plug-ins (including a brand new metadata plug-in for Oracle Virtual Networking). This release marks yet another major and significant milestone for Enterprise Manager 12c Cloud Control product releases.

    The following shows the list of new plug-in versions available along with Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3). The new plug-ins have a dependency on the 12.1.0.3 platform, and customers need to be on at least the 12.1.0.3 platform (OMS/Agent) version of the product before being able to deploy/use these plug-in versions. (In other words, the new plug-in versions cannot be deployed unless Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) is installed or upgraded to.)

    The Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) release includes tons of new features, along with several stability and performance bug fixes, and is available for download for all platforms from OTN.

    Installation/Upgrade paths:
    - EM customers can do a fresh installation using Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) and will get the latest version of the platform, along with all the latest versions of plug-ins and metadata plug-ins, out of the box.
    - EM customers who are on Release 1 (12.1.0.1 + BP1) or Release 2 (12.1.0.2), or on the older 11g and 10.2.0.5 releases, can choose to use the Oracle Enterprise Manager Cloud Control 12c Release 3 bits to upgrade directly to the latest Release 3, and the plug-ins will be automatically upgraded to the latest versions.

    The Enterprise Manager Certification Matrix is also now available on My Oracle Support - here.

    Read the article

  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: brokencount > 0.

    Opening Update Manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install). Specifically, I have my Ubuntu on an Aspire One with 8 GB of internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot, but to no avail. I have also tried apt-get install --fix-broken and I get the following:

    sudo apt-get install --fix-broken
    [sudo] password for elina:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      linux-headers-3.2.0-33-generic-pae
    The following NEW packages will be installed:
      linux-headers-3.2.0-33-generic-pae
    0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
    1 not fully installed or removed.
    Need to get 0 B/977 kB of archives.
    After this operation 11,3 MB of additional disk space will be used.
    Do you want to continue [Y/n]; y
    (Reading database ... 437051 files and directories currently installed.)
    Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ...
    dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack):
     unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device
    No apport report written because the error message indicates a disk full error
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried all the suggestions I could find:

    sudo apt-get clean
    sudo apt-get autoclean
    sudo apt-get autoremove
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get -f install
    sudo apt-get install --fix-broken

    Then I saw that the error mentioned free space, so I did a df -h and the result was:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       7,0G  5,5G  1,1G  84% /
    udev            235M  4,0K  235M   1% /dev
    tmpfs            97M  816K   96M   1% /run
    none            5,0M     0  5,0M   0% /run/lock
    none            242M  352K  242M   1% /run/shm

    I see that on my root filesystem I have 1.1 GB free. The broken package is linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb, which only takes up 11.3 MB on my hard drive. I'm soooo lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket; it's really not worth the time. Any help for fixing this would be hot.

    Read the article

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx

    I had a module dependency that I'm pulling down with RequireJS and that I needed to use and write tests against. In this case, I don't care about the actual implementation of the module (it's simple enough that I'm just avoiding some AJAX calls). EDIT: make sure you look at the bottom example after the edit before using the config.map approach. I found that there is an easier way.

    I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from "Artem Oboturov". We can use the config.map from RequireJS to achieve this. Here is some code:

    A module example ("usefulModule" in Common/Modules/usefulModule.js):

    define([], function() {
        "use strict";
        var testMethod = function() { ... };
        // add more functionality of the module
        return {
            testMethod: testMethod
        };
    });

    A consumer of usefulModule example:

    define([
        "Common/Modules/usefulModule"
    ], function(usefulModule) {
        "use strict";
        var consumerModule = function() {
            var self = this;
            // add functionality of the module
        };
    });

    Using config.map in the HTML of the test runner page (and in your Karma config - I'm still trying to figure this out):

    map: {
        '*': {
            // replace usefulModule with a mock
            'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js'
        }
    }

    With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn't ready to introduce a new library at this time. That's all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of this approach.

    EDIT: After all that, here's another, probably better way:

    The consumer class, updated:

    define([
        "Common/Modules/usefulModule"
    ], function(UsefulModule) {
        "use strict";
        var consumerModule = function() {
            var self = this;
            self.usefulModule = new UsefulModule();
            // add functionality of the module
        };
    });

    Jasmine test:

    define([
        "consumerModule",
        "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js"
    ], function(consumerModule, UsefulModuleMock) {
        describe("when mocking out the module", function() {
            it("should probably just override the property", function() {
                var consumer = new consumerModule();
                consumer.usefulModule = new UsefulModuleMock();
            });
        });
    });

    Thanks for letting me think out loud :-).

    Read the article

  • Error installing RVM

    - by Dbugger
    I am following this guide, but this is the output I receive. What is the problem?

        dbugger@mercury:~$ \curl -sSL https://get.rvm.io | bash -s stable --rails
        Downloading https://github.com/wayneeseguin/rvm/archive/stable.tar.gz
        Upgrading the RVM installation in /home/dbugger/.rvm/
        RVM PATH line found in /home/dbugger/.profile /home/dbugger/.bashrc /home/dbugger/.zshrc.
        RVM sourcing line found in /home/dbugger/.bash_profile /home/dbugger/.zlogin.
        Upgrade of RVM in /home/dbugger/.rvm/ is complete.

        # Enrique,
        #
        # Thank you for using RVM!
        # We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
        #
        # ~Wayne, Michal & team.

        In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

        Upgrade Notes:
          * No new notes to display.

        rvm 1.25.27 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]

        Searching for binary rubies, this might take some time.
        No binary rubies available for: ubuntu/14.04/x86_64/ruby-2.1.2.
        Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
        Checking requirements for ubuntu.
        Installing requirements for ubuntu.
        Updating system..........
        Installing required packages: gawk, libreadline6-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3....
        Error running 'requirements_debian_libs_install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3',
        showing last 15 lines of /home/dbugger/.rvm/log/1401804140_ruby-2.1.2/package_install_gawk_libreadline6-dev_libssl-dev_libyaml-dev_libsqlite3-dev_sqlite3.log
        ++ /scripts/functions/utility : __rvm_try_sudo() 405 > sudo -p '%p password required for '\''apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3'\'': ' apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3
        Reading package lists...
        Building dependency tree...
        Reading state information...
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libssl-dev : Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2) but 1.0.1f-1ubuntu2.1 is to be installed
        E: Unable to correct problems, you have held broken packages.
        ++ /scripts/functions/utility : __rvm_try_sudo() 405 > return 100
        ++ /scripts/functions/requirements/ubuntu : requirements_debian_libs_install() 36 > return 100
        Requirements installation failed with status: 100.

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, and where another solution is more appropriate (context variable, singleton, etc.). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections this way. I have searched on this topic but have found little discussion of the pattern. Are there some good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I am interested both in references and in your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others.

    Edit: Some simple background on custom scopes: Google Guice custom scope. Some definitions relevant to the above:

    “scoping” - A set of requirements that define what objects get injected at what time. A simple example of this is a thread scope, based on a ThreadLocal; this scope would inject a variable based on which thread instantiated the class. (A sketch of such a scope is given after the definitions below.)

    “context variable” - A repository passed from one object to another, holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example:

        methodOne(Context context){
            methodTwo(context);
        }

        methodTwo(Context context){
            ... // same context as method one, if called from method one
        }

    “globally scoped singleton” - Following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: only one of these objects is ever instantiated. Here's an example of a JSR-330 Singleton-scoped object:

        @Singleton
        public class SingletonExample {
            ...
        }

    Usage:

        public class One {
            @Inject SingletonExample example1;
        }

        public class Two {
            @Inject SingletonExample example2;
        }

    After instantiation: one.example1 == two.example2 // true
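    The thread-scope sketch referred to above, written against Guice's Scope interface, might look roughly like this (a minimal illustration only: the per-thread cache is simplistic, there is no eviction, and the scope-annotation binding is omitted):

        import com.google.inject.Key;
        import com.google.inject.Provider;
        import com.google.inject.Scope;
        import java.util.HashMap;
        import java.util.Map;

        // Each thread that asks for a scoped object gets, and keeps, its own
        // instance, created lazily by the unscoped provider on first use.
        public class ThreadScope implements Scope {

            private final ThreadLocal<Map<Key<?>, Object>> cache =
                    ThreadLocal.withInitial(HashMap::new);

            @Override
            public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
                return () -> {
                    Map<Key<?>, Object> perThread = cache.get();
                    @SuppressWarnings("unchecked")
                    T instance = (T) perThread.get(key);
                    if (instance == null) {
                        instance = unscoped.get();      // one instance per thread
                        perThread.put(key, instance);
                    }
                    return instance;
                };
            }
        }

    In a module this would then be registered with something like bindScope(ThreadScoped.class, new ThreadScope()), where ThreadScoped is a custom annotation marked with @ScopeAnnotation.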

    Read the article

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with Samba pre-installed, and I need to share some media with a couple of other boxes. I followed the Think Geek directions found here. I had it all set up a couple of days ago and thought I was in the clear, but I rebooted this evening and when it came back up Samba was not started. I determined this by trying to access the Samba shares, which returned an error connecting to the server. I can ssh into the box, so I know it is connected.

    In my infinite wisdom, I figured I had just messed something up and would simply uninstall and reinstall. So I ran sudo apt-get purge samba and sudo apt-get purge smbfs, then tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient
        The following NEW packages will be installed:
          samba smbfs
        0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
        Need to get 0B/8,131kB of archives.
        After this operation, 22.6MB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package samba.
        (Reading database ... 57098 files and directories currently installed.)
        Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb)...
        Selecting previously deselected package smbfs.
        Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ...
        Processing triggers for ureadahead ...
        Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ...
        Generating /etc/default/samba...
        update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode.
        smbd start/running, process 2963
        **start: Job failed to start**
        Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ...

    The bold is my own emphasis. I am not sure what I messed up here or how to get back to where it was, though I am pretty sure I have made it worse. I found where the logs are located, /var/log, and found this line that seems to be the culprit:

        Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file

    So it seems Samba does not create its configuration files. Is there a way to get Samba to recreate them?

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the time you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or database objects (like new tables, views, indexes, etc.). I have seen that libraries and database objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common for some of these manual steps to be missed when you have multiple teams and people working on the system. Here are a few points to ponder:

    - Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar, or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory.

    - Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma-separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma-separated list of features in the Advanced Build Settings.

    - The Component Wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the “Has Install” checkbox on the “Install/Uninstall Settings” tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component; a rough sketch of such a filter is shown below.

    To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.
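    The sketch below only illustrates the shape of such an install filter. The package, interface and constant names follow the usual Content Server filter pattern but are assumptions here, not taken from the text above; verify the exact signatures against your WebCenter Content SDK before using anything like this.

        // Assumed intradoc.* packages and interfaces; check your SDK for the real signatures.
        import intradoc.common.ExecutionContext;
        import intradoc.common.ServiceException;
        import intradoc.data.DataBinder;
        import intradoc.data.DataException;
        import intradoc.data.Workspace;
        import intradoc.shared.FilterImplementor;

        public class MyComponentInstallFilter implements FilterImplementor {

            // Invoked by the Component Wizard-generated install/uninstall hooks.
            public int doFilter(Workspace ws, DataBinder binder, ExecutionContext cx)
                    throws DataException, ServiceException {
                // Hypothetical flag indicating whether we are installing or uninstalling.
                String action = binder.getLocal("MyComponent_installAction");
                if ("install".equals(action)) {
                    // Run the DDL defined as a query resource in the component
                    // (e.g. a "QcreateMyComponentTables" query); the exact Workspace
                    // call for executing component-defined queries varies by SDK version.
                } else if ("uninstall".equals(action)) {
                    // Drop the same database objects here.
                }
                return CONTINUE; // assumed FilterImplementor constant
            }
        }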

    Read the article

  • Advice for how to handle company pride

    - by user17971
    We have this "amazing" little product using the latest development methodologies and components, with all the bells and whistles. I took over this product maybe six months ago and have struggled with it from day one. Even though it is supposedly state of the art because of all its amazing structure (dependency injection, inversion of control via the Unity framework, hibernation, domain-driven design in a .NET MVVM XAML application) meant to make it streamlined and modular, I knew from the moment I saw the monolith that it was going to be an uphill struggle for me: a lot of little code bits scattered all around in neatly organized paradigms. Debugging is difficult, tracing the code is difficult, writing new code is difficult. Some modifications are surprisingly easy, but that doesn't outweigh the problems I have with the code by a long shot.

    When I took over the project I was told that the new management console was ready for delivery and all I had to do was compile it and drop it. This was the beginning of an uphill struggle: our customer didn't agree at all that this was the functionality they had asked for, so I had to modify the program to their specifications. Since the project has been pretty much overdue since I took it over, it has always been important that we didn't add or change much in the original system; I could only modify the existing bits.

    Fast forward to today, when I have finally addressed all their comments and issues with the program. But now I think the users have opened their eyes (even though they saw this program many times) to the fact that they will be going backwards with this new system, that it will be much worse than the tool they have today (for a long time, due to the fact that I'm the only resource on the project: project manager, tester, developer, integration specialist, etc.).

    My problem is that I lost faith in this system quite early on, due to the nature of the program. Although I have made many changes and improvements to the system, I wholeheartedly sympathize with the poor users who are going to start using it: it does not do nearly all the things it should. I had a conversation internally with my boss where I told him what I thought about it, that if I were the customer I wouldn't have spent money developing it.

    So what do I do now? The system is ready, on a staging environment, and nobody likes it; it's too slow and boring and does maybe 50% of what they need it to do. Despite all the energy I have put into this project, working around the clock, I wouldn't mind scrapping the system. But we've spent a lot of money (well, my salary) developing it, and my company wants us to be proud of everything we do and advocate for it. How do I handle the contractor when he asks for advice? Surely I can tell him "this is what we agreed upon based on your use case scenarios" and be done with it? How do I inform my boss about this situation? He knows how I feel about it, but I always get the feeling he lets my criticism pass him by as just hot air, gone tomorrow.

    Read the article

  • apt-get upgrade stuck at the same package (openjdk-6-jre-headless)

    - by decibyte
    I'm stuck and can't upgrade my system. Running sudo apt-get upgrade gives me the following:

        mmm@alalunga:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
        The following packages will be upgraded:
          apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common
          firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data
          gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao
          icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client
          isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83
          libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev
          openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl
          php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10
          python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem
          totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics
        74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        Need to get 317 MB/327 MB of archives.
        After this operation, 1.481 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%]

    It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on what is the last line above), then downloads the package again. It is at its 13th download attempt at the moment of writing. The actual downloads seem to finish just fine, but whatever happens after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result: hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) one to the main server. No luck. It is also keeping me from upgrading all the other packages. What can I do?

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:

    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependents. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here and that this isn't the right approach:

    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit: More context: The model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
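    For what it's worth, the established idea I keep circling is essentially spreadsheet-style recalculation over a dependency graph: each derived variable registers the inputs it reads, and changing an input re-evaluates its dependents. A rough, language-neutral sketch (written in Java here; all names are mine, not part of any framework):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Supplier;

        // Minimal dependency-graph recalculation: no cycle detection and no
        // topological ordering, just naive propagation from a changed input.
        public class CalculationGraph {
            private final Map<String, Double> values = new HashMap<>();
            private final Map<String, Supplier<Double>> formulas = new HashMap<>();
            private final Map<String, List<String>> dependents = new HashMap<>();

            public void setInput(String name, double value) {
                values.put(name, value);
                recalcDependents(name);
            }

            public void define(String name, Supplier<Double> formula, String... inputs) {
                formulas.put(name, formula);
                for (String input : inputs) {
                    dependents.computeIfAbsent(input, k -> new ArrayList<>()).add(name);
                }
                values.put(name, formula.get());   // initial evaluation
            }

            public Double get(String name) {
                return values.get(name);
            }

            private void recalcDependents(String changed) {
                for (String dep : dependents.getOrDefault(changed, Collections.emptyList())) {
                    values.put(dep, formulas.get(dep).get());
                    recalcDependents(dep);
                }
            }

            public static void main(String[] args) {
                CalculationGraph g = new CalculationGraph();
                g.setInput("height", 1.80);
                g.setInput("weight", 75.0);
                g.define("bmi", () -> g.get("weight") / Math.pow(g.get("height"), 2), "height", "weight");
                g.setInput("weight", 80.0);       // re-evaluates bmi automatically
                System.out.println(g.get("bmi")); // ~24.7
            }
        }

    A real version would topologically sort the dependents before re-evaluating (so shared downstream variables are computed only once) and would detect cycles, which is exactly what spreadsheet engines and reactive/dataflow libraries do.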

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The same unsolved error is reported in this bug. This is my grub configuration file:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX=" vga=792 splash"

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        GRUB_GFXMODE=1024x768-24

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Thank you for answering.
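    Update: looking at that file again, the GRUB_CMDLINE_LINUX_DEFAULT line has lost its leading G and contains an unbalanced quote plus a duplicated video= fragment, which is presumably why the shell reports the backquote-substitution error when update-grub sources the file. My guess is that the line was meant to read something like the following (after correcting it, sudo update-grub should run cleanly and the kernel removal can be retried):

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap"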

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >