Search Results

Search found 1855 results on 75 pages for 'weak linking'.


  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed below; this is just a summary, of course. I'm using the method shown for the "Details" table to represent a type of 'inheritance', so to speak - since "Item" and "Downloadable" are going to be identical except that each has a few additional fields relevant only to it. My question is about this design pattern. This sort of thing appears many, many times in our projects - is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me. There are 5 item types: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried using a type-discriminator field (an enumerated 'Type' column) in conjunction with nullable columns, but I was told that is a bad approach. What I have done instead is take everything the types share and place it in a single table, and then each type gets its own table that references a column in that 'base' table. The problem occurs with relationships, or junctions - linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as a result my database is growing very, very large. Is there a smarter practice for this kind of design?
        Item
            ID | GUID
            Name | varchar(64)
        Product
            ID | GUID
            Name | varchar(64)
            Store | GUID [FK]
            Details | GUID [FK]
        Downloadable
            ID | GUID
            Name | varchar(64)
            Url | nvarchar(2048)
            Details | GUID [FK]
        Details
            ID | GUID
            Price | decimal
            Description | text
        Peripherals [JUNCTION]
            ID | GUID
            Detail | GUID [FK]
        Store
            ID | GUID
            Addresses | GUID
        Addresses
            ID | GUID
            Name | nvarchar(64)
            State | int [FK]
            ZipCode | int
            Address | nvarchar(64)
        State
            ID | int
            Name | varchar(32)

    Read the article

  • explain notifier.c from the Linux kernel

    - by apollon
    I'm seeking to fully understand the following code snippet from kernel/notifier.c. I have read about and built simple linked lists, and I think I understand the construct from K&R's C Programming Language. The second line below, the one beginning with 'int', appears to be two items together, which is unclear to me. The first is (*notifier_call), which I believe has independent but related significance to the second part, which contains the 'notifier_block' term. Can you explain how it works in detail? I understand that there is a function pointer and that multiple subscribers are possible, but I lack the way to tie these facts together and could use a primer or key so that I understand exactly how the code works. The third line looks like it contains the linking structure, or recursive nature. Forgive my terms, and correct them as you see fit - I am a new student of computer science terminology.
        struct notifier_block {
            int (*notifier_call)(struct notifier_block *, unsigned long, void *);
            struct notifier_block *next;
            int priority;
        };
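
    For illustration only - a minimal, self-contained C-style sketch (not the kernel's actual code) of how those three members cooperate: each subscriber owns a notifier_block whose notifier_call points at its handler, next links the blocks into a singly linked list, and the publisher walks that list and invokes every handler when an event fires, roughly what the kernel's notifier_call_chain() does. The priority field is only an ordering hint used when blocks are inserted into the chain.
        #include <cstdio>

        // Same shape as the kernel struct: a callback pointer, a link to the
        // next subscriber, and a priority used to order insertions.
        struct notifier_block {
            int (*notifier_call)(struct notifier_block *, unsigned long, void *);
            struct notifier_block *next;
            int priority;
        };

        static int my_handler(struct notifier_block *self, unsigned long event, void *data) {
            (void)data;  // unused in this sketch
            std::printf("handler %p saw event %lu\n", static_cast<void *>(self), event);
            return 0;
        }

        // Roughly what notifier_call_chain() does: walk the list, call each handler.
        static void call_chain(struct notifier_block *head, unsigned long event, void *data) {
            for (struct notifier_block *nb = head; nb != nullptr; nb = nb->next)
                nb->notifier_call(nb, event, data);
        }

        int main() {
            notifier_block a = { my_handler, nullptr, 0 };
            notifier_block b = { my_handler, &a, 0 };  // b links to a: two subscribers
            call_chain(&b, 42, nullptr);               // both handlers run
        }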

    Read the article

  • Duplicate information from sql result

    - by puddleJumper
    I looked at about 18 other posts on here, and most people are asking how to delete the duplicate records, not just hide them. So, my problem: I have a database with staff members who are associated with locations, and many of the staff members are associated with more than one location. What I want to do is display only the first location listed in the MySQL result and skip over the others. I have an SQL query linking the tables together, and it works aside from showing the same information multiple times for staff members who are in more than one location - the same person appears once for each of their locations. This is the SQL statement I have currently:
        SELECT staff_tbl.staffID, staff_tbl.firstName, staff_tbl.middleInitial, staff_tbl.lastName,
               location_tbl.locationID, location_tbl.staffID,
               officelocations_tbl.locationID, officelocations_tbl.officeName,
               staff_title_tbl.title_ID, staff_title_tbl.staff_ID,
               titles_tbl.titleID, titles_tbl.titleName
        FROM staff_tbl
        INNER JOIN location_tbl ON location_tbl.staffID = staff_tbl.staffID
        INNER JOIN officelocations_tbl ON location_tbl.locationID = officelocations_tbl.locationID
        INNER JOIN staff_title_tbl ON staff_title_tbl.staff_ID = staff_tbl.staffID
        INNER JOIN titles_tbl ON staff_title_tbl.title_ID = titles_tbl.titleID
    and my PHP is:
        <?php do { ?>
        <tr>
          <td><?php echo $row_rs_Staff_Info['firstName']; ?>&nbsp; <?php echo $row_rs_Staff_Info['lastName']; ?></td>
          <td><?php echo $row_rs_Staff_Info['titleName']; ?>&nbsp; </td>
          <td><?php echo $row_rs_Staff_Info['officeName']; ?>&nbsp; </td>
        </tr>
        <?php } while ($row_mysqlResult = mysql_fetch_assoc($rs_mysqlResult)); ?>
    What I would like to know is: is there a way, using PHP, to select only the first entry listed for each person, display that, and skip over the others? I was thinking it could be done by adding the staffIDs to an array and, if an ID is already in there, skipping the next row listed, but I wasn't quite sure how to write it that way. Any help would be great - thank you in advance.
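
    Purely to illustrate the "remember which staffIDs have already been shown" idea from the question (a hedged sketch in C++ with made-up rows, not the PHP/MySQL loop the asker would actually write): keep a set of IDs seen so far and skip any row whose ID is already in it.
        #include <iostream>
        #include <string>
        #include <unordered_set>
        #include <vector>

        struct Row { int staffID; std::string name, title, office; };

        int main() {
            // Stand-in for the joined result set: one row per staff/location pair.
            std::vector<Row> rows = {
                {1, "Ann", "Manager", "North Office"},
                {1, "Ann", "Manager", "South Office"},   // duplicate staffID: skipped
                {2, "Bob", "Clerk",   "East Office"},
            };
            std::unordered_set<int> seen;
            for (const Row &r : rows) {
                if (!seen.insert(r.staffID).second)
                    continue;                            // already displayed this person
                std::cout << r.name << " | " << r.title << " | " << r.office << "\n";
            }
        }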

    Read the article

  • After compiling PHP, I get mod_fcgid: error reading data from FastCGI server

    - by user34295
    I'm trying to add multiple PHP version in Plesk 12. Switching my domain to the new version PHP 5.4.29 result in this error: (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server Here is phpinfo() of the complied PHP version, obtained running php54-cgi index.php from the terminal. The same script placed under document root doesn't work in FastCGI. How can I debug/try to figure out what's the error? Currently running CentOS 6.5 x64, Plesk v12.0.18_build1200140529.2, PHP 5.5.13. I've downloaded PHP 5.4.29: cd /usr/local/src curl -O http://it1.php.net/distributions/php-5.4.29.tar.gz cd php-5.4.29 And configured with: ./configure \ --prefix=/usr/local/php54 \ --with-bz2 \ --with-config-file-path=/usr/local/php54/etc \ --with-config-file-scan-dir=/usr/local/php54/etc/php.d \ --with-curl \ --with-gd \ --with-gettext \ --with-iconv \ --with-layout=PHP \ --with-libxml-dir=/usr/local/php54 \ --with-mhash \ --with-mysql=mysqlnd \ --with-mysqli=mysqlnd \ --with-openssl \ --with-pdo-mysql=mysqlnd \ --with-readline \ --with-xsl \ --with-zlib \ --enable-calendar \ --enable-cgi \ --enable-exif \ --enable-ftp \ --enable-intl \ --enable-mbstring \ --enable-pcntl \ --enable-shmop \ --enable-sockets \ --enable-sockets \ --enable-sysvmsg \ --enable-sysvsem \ --enable-sysvshm \ --enable-wddx \ --enable-zip Then: make && make install Installing PHP CLI binary: /usr/local/php54/bin/ Installing PHP CLI man page: /usr/local/php54/php/man/man1/ Installing PHP CGI binary: /usr/local/php54/bin/ Installing PHP CGI man page: /usr/local/php54/php/man/man1/ Installing build environment: /usr/local/php54/lib/php/build/ Installing header files: /usr/local/php54/include/php/ Installing helper programs: /usr/local/php54/bin/ program: phpize program: php-config Installing man pages: /usr/local/php54/php/man/man1/ page: phpize.1 page: php-config.1 Installing PEAR environment: /usr/local/php54/lib/php/ [PEAR] Archive_Tar - installed: 1.3.11 [PEAR] Console_Getopt - installed: 1.3.1 warning: pear/PEAR requires package "pear/Structures_Graph" (recommended version 1.0.4) warning: pear/PEAR requires package "pear/XML_Util" (recommended version 1.2.1) [PEAR] PEAR - installed: 1.9.4 Wrote PEAR system config file at: /usr/local/php54/etc/pear.conf You may want to add: /usr/local/php54/lib/php to your php.ini include_path [PEAR] Structures_Graph- installed: 1.0.4 [PEAR] XML_Util - installed: 1.2.1 /usr/local/src/php-5.4.29/build/shtool install -c ext/phar/phar.phar /usr/local/php54/bin ln -s -f /usr/local/php54/bin/phar.phar /usr/local/php54/bin/phar Installing PDO headers: /usr/local/php54/include/php/ext/pdo/ Copied php.ini-production to /usr/local/php54/etc/php.ini and added a new handler in Plesk: /usr/local/psa/bin/php_handler --add -displayname 5.4.29 -path /usr/local/php54/bin/php-cgi -phpini /usr/local/php54/etc/php.ini -type fastcgi -id php54 Symbolic linking: ln -s /usr/local/php54/bin/php /usr/local/bin/php54 ln -s /usr/local/php54/bin/php-cgi /usr/local/bin/php54-cgi New installed version: php54-cgi -m [PHP Modules] bz2 calendar cgi-fcgi Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext hash iconv intl json libxml mbstring mhash mysql mysqli mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix readline Reflection session shmop SimpleXML sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter xsl zip zlib [Zend Modules]

    Read the article

  • How to change key mappings in Cygwin's Vim

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Edit 1: OK, it seems, that I got a first trace: The terminal sends for <C-Left> '^[[1;5D', and for right '^[[1;5C' (evaluated with the <C-V><C-Left> trick). If vim interprets this literally and discards the first characters, it explains the strange behaviour. Any ideas, how I could change this key mapping? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv

    Read the article

  • How to find out Vim's currently mapped commands

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv

    Read the article

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow so I had to resort to using MacVim. There is nothing wrong with MacVim, however my workflow would be much smoother if I used only Terminal/iTerm2. When its slow Loading files, in particular Rails files takes about 1 - 1.5s. Removing rails.vim decreases this time to 0.5 - 1s. In MacVim this is instantaneous. Scrolling through the rows and columns via h, j, k, l. It progressively gets slower the longer I hold down the keys. Eventually, it starts jumping rows. I have my Key Repeat set to Fast and Delay Until Repeat set to Short. After 10 - 15 minutes of usage, using plugins such as ctrlp or Command-T gets very laggy. I'd type a letter, wait 2 - 3s, then type the next. My Setup 11" MacBook Air running Mac OS X Version 10.7.3 (1.6 Ghz Intel Core 2 Duo, 4 GB DDR3) My dotfiles. > vim --version VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23) MacOS X (unix) version Included patches: 1-333 Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl +persistent_undo +postscript +printer +profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim" Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1 Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv -framework Cocoa -framework Python -lruby I've tried running without any plugins or syntax highlighting. It opens files a lot faster but still not as fast as MacVim. But the other two problems still exist. Why is my vim configuration slow? How can I improve the speed of my vim configuration within Terminal or iTerm2?

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS, these come into our Layer 7 load balancer which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80 which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too, MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing: stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 This is repeated infinitely and never stops until you SEGKILL the process or oomkiller kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly what Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14 which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) they have Intel i3 540s at 3.07Ghz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past, does anyone have any ideas or suggestions? Are there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP) Thanks!

    Read the article

  • Vim digraphs are not working

    - by Hauleth
    I've tried to input digraphs in Vim (without vi compatibility), and unfortunately I can't. After using ControlKP * it should input big greek pi letter. I've also tried using PBS * with :set digraph, but no success either. My gvim --version output: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 23 2012 18:42:18) Included patches: 1-712 Compiled by ArchLinux Big version with GTK2 GUI. Features included (+) or not (-): +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() +gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +lua +menu +mksession +modify_fname +mouse +mouseshape +mouse_dec +mouse_gpm -mouse_jsbterm +mouse_netterm +mouse_sgr -mouse_sysmouse +mouse_urxvt +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra +perl +persistent_undo +postscript +printer -profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup +X11 -xfontset +xim +xsmp_interact +xterm_clipboard -xterm_save system vimrc file: "/etc/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" system gvimrc file: "/etc/gvimrc" user gvimrc file: "$HOME/.gvimrc" system menu file: "$VIMRUNTIME/menu.vim" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng15 -I/usr/local/include -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: gcc -L. -Wl,-O1,--sort-common,--as-needed,-z,relro -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lglib-2.0 -lSM -lICE -lXt -lX11 -lXdmcp -lSM -lICE -lm -lncurses -lnsl -lacl -lattr -lgpm -ldl -L/usr/lib -llua -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -fstack-protector -L/usr/local/lib -L/usr/lib/perl5/core_perl/CORE -lperl -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -L/usr/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -lruby -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib Can you tell me how to get my ControlK digraphs work? EDIT: Output of :verbose set enc? fenc? is: encoding=utf-8 Last set from ~/.vimrc fileencoding=

    Read the article

  • Create and manage child name servers (glue records) within my domain?

    - by basilmir
    Preface I use a top level domain provider that only allows me to add "normal" third-party name servers (a list where i can add "ns1.hostingcompany.com" type entries... nothing else) AND "child name servers" which i can later attach to my parent account ( ns1.myowndomain.com and an ip address). They do not provide other means of linking up. I want to host my own server and dns, even with just one name server (at first). My setup: Airport Extreme - get's a static ip address from my ISP Mac Mini Server - sits behind the Airport and get's a 10.0.1.2 My problem is that i can't seem to configure DNS correctly. I added a "child nameserver" with my airport's external static ip address at the top level provider, so to my understanding i should have all DNS traffic redirected to my Airport. I've opened port 53 UDP to let the traffic in. Now, what i don't get is this. My Mini Server is sitting on a 10.0.1.2 address and i have setup dns correctly, with an A record to point and resolve my server AND a reverse lookup to that 10.0.1.2. So it's ok for "internal stuff". Here is the clicker... How, when a request comes from the exterior for a reverse lookup, does the server "know" ... well look i have everything in 10.0.1.2 but the guy outside needs something from my real address. I can't begin to describe the MX record bonanza... How do i set this "right"? Do i "need" my Mini Server to sit on the external address directly (i can see how this could be the preferred solution, being close to a "real" server i have in my mind). If not... do i need a PTR record on the 10.0.1.2 server but with the external address in there? My dream: I will extend this "setup" with multiple Mini's in different cities where i work. I want a distributed something (Xgrid comes to mind). PS. Be gentle, i've read 2 books and the subject, and bought both the Lynda Essentials and DNS and Networking to boot, still i'm far from being on top of things.

    Read the article

  • cakephp & nginx config/rewrite rules

    - by seanl
    Hi somebody please help me out, I've asked this at stackoverflow as well but not got much of a response and was debating whether it was programming or server related. I’m trying to setup a cakephp environment on a Centos server running Nginx with Fact CGI. I already have a wordpress site running on the server and a phpmyadmin site so I have PHP configured correctly. My problem is that I cannot get the rewrite rules setup correct in my vhost so that cake renders pages correctly i.e. with styling and so on. I’ve googled as much as possible and the main consensus from the sites like the one listed below is that I need to have the following rewrite rule in place location / { root /var/www/sites/somedomain.com/current; index index.php index.html; # If the file exists as a static file serve it # directly without running all # the other rewrite tests on it if (-f $request_filename) { break; } if (!-f $request_filename) { rewrite ^/(.+)$ /index.php?url=$1 last; break; } } http://blog.getintheloop.eu/2008/4/17/nginx-engine-x-rewrite-rules-for-cakephp problem is these rewrite assume you run cake directly out of the webroot which is not what I want to do. I have a standard setup for each site i.e. one folder per site containing the following folders log, backup, private and public. Public being where nginx is looking for its files to serve but I have cake installed in private with a symlink in public linking back to /private/cake/ this is my vhost server { listen 80; server_name app.domain.com; access_log /home/public_html/app.domain.com/log/access.log; error_log /home/public_html/app.domain.com/log/error.log; #configure Cake app to run in a sub-directory #Cake install is not in root, but elsewhere and configured #in APP/webroot/index.php** location /home/public_html/app.domain.com/private/cake { index index.php; if (!-e $request_filename) { rewrite ^/(.+)$ /home/public_html/app.domain.com/private/cake/$1 last; break; } } location /home/public_html/app.domain.com/private/cake/ { index index.php; if (!-e $request_filename) { rewrite ^/(.+)$ /home/public_html/app.domain.com/public/index.php?url=$1 last; break; } } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/public_html/app.domain.com/private/cake$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } Now like I said I can see the main index.php of cake and have connected it to my DB but this page is without styling so before I proceed any further I would like to configure it correctly. What am I doing wrong………. Thanks seanl

    Read the article

  • Best ASP.NET E-Commerce Framework (aspDotNetStoreFront, AbleCommerce, BVCommerce, MediaChase,...)

    - by EfficionDave
    I need a full-featured asp.net E-Commerce solution for a project. It needs to be easily extensible as I need to extend it to integrate with a custom Flash based personalization engine. It should also have a great administrative back-end that handles inventory tracking, report, labels, shipping, ... so that the client can really use it as the basis for their business. I've used quite a few different E-Commerce systems including OSCommerce, Zen Cart, Catalook, and eTailer. While I was impressed with Zen Cart and eTailer, this project is big enough that I want to try a more serious, commercial offering. I'm particularly interested in thoughts on aspdotnetstorefront, BVCommerce, AbleCommerce, and MediaChase. [UPDATE] I've done quite a bit of evaluation since I posted this question and will give some of my findings below: BVCommerce - The price point is nice, the product seems fairly well regarded, and I've heard good things about customizing/extending it, it just doesn't seem like it's going to meet my needs as an Top Tier ECommerce solution. After viewing the tutorial videos, the Admin interface just seems too basic and outdated. aspDotNetStorefront - The integration with DotNetNuke was it's biggest selling point for me but after reading peoples opinions, it's integration seems quite weak. It's sounds like there's a significant learning curve and the architecture just doesn't sound like it's as clean as it should be. AbleCommerce - After a rather thorough evaluation, I have selected AbleCommerce for at least my next project. The Admin interface is beautiful and has a nice list of features. The E-Commerce portion of the user side is very well done, highly skin-able (using Master Pages and a couple other techniques), and the checkout workflow has all the features I need while still being very clean. Source to the API can be bought for $500 but you generally won't need it. There are some 3rd party modules available but there's not currently a large market for these. The biggest glaring weakness of AbleCommerce I've found is their CMS capabilities are very limited and not at all well suited for a non-technical user to make most site content updates. But I have a good solution for that that I plan adapting into AbleCommerce. They will automatically generate a site for you to play with to your heart's content for 30 days. It is defniitely worth checking out.

    Read the article

  • Dojo extending dojo.dnd.Source, move not happening. Ideas?

    - by Soulhuntre
    Hey all. OK... I have a simple Dojo page with the bare essentials: three ULs with some LIs in them. The idea is to allow drag-and-drop among them, but if any UL goes empty because its last item was dragged out, I will put up a message to give the user some instructions. In order to do that, I wanted to extend the dojo.dnd.Source dijit and add some intelligence. It seemed easy enough. To keep things simple (I am loading Dojo from a CDN), I am simply declaring my extension as opposed to doing a full module load. The declaration function is here...
        function declare_mockupSmartDndUl(){
            dojo.require("dojo.dnd.Source");
            dojo.provide("mockup.SmartDndUl");
            dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
                markupFactory: function(params, node){
                    //params._skipStartup = true;
                    return new mockup.SmartDndUl(node, params);
                },
                onDndDrop: function(source, nodes, copy){
                    console.debug('onDndDrop!');
                    if(this == source){
                        // reordering items
                        console.debug('moving items from us');
                        // DO SOMETHING HERE
                    }else{
                        // moving items to us
                        console.debug('moving items to us');
                        // DO SOMETHING HERE
                    }
                    console.debug('this = ' + this);
                    console.debug('source = ' + source);
                    console.debug('nodes = ' + nodes);
                    console.debug('copy = ' + copy);
                    return dojo.dnd.Source.prototype.onDndDrop.call(this, source, nodes, copy);
                }
            });
        }
    I have an init function that uses this to decorate the lists...
        dojo.addOnLoad(function(){
            declare_mockupSmartDndUl();
            if(dojo.byId('list1')){
                //new mockup.SmartDndUl(dojo.byId('list1'));
                new dojo.dnd.Source(dojo.byId('list1'));
            }
            if(dojo.byId('list2')){
                new mockup.SmartDndUl(dojo.byId('list2'));
                //new dojo.dnd.Source(dojo.byId('list2'));
            }
            if(dojo.byId('list3')){
                new mockup.SmartDndUl(dojo.byId('list3'));
                //new dojo.dnd.Source(dojo.byId('list3'));
            }
        });
    It is fine as far as it goes; you will notice I left "list1" as a standard Dojo dnd Source for testing. The problem is this: list1 will happily accept items from lists 2 & 3, which will move or copy as appropriate. However, lists 2 & 3 refuse to accept items from list1. It is as if the DND operation is being cancelled, but the debugger does show the dojo.dnd.Source.prototype.onDndDrop.call happening, and the parameters do look OK to me. Now, the documentation here is really weak, so the example I took some of this from may be way out of date (I am using 1.4). Can anyone fill me in on what might be the issue with my extension dijit? Thanks!

    Read the article

  • How to GET a read-only vs editable resource in REST style?

    - by Val
    I'm fairly familiar with REST principles, and have read the relevant dissertation, Wikipedia entry, a bunch of blog posts and StackOverflow questions on the subject, but still haven't found a straightforward answer to a common case: I need to request a resource to display. Depending on the resource's state, I need to render either a read-only or an editable representation. In both cases, I need to GET the resource. How do I construct a URL to get the read-only or editable version? If my user follows a link to GET /resource/<id>, that should suffice to indicate to me that s/he needs the read-only representation. But if I need to server up an editable form, what does that URL look like? GET /resource/<id>/edit is obvious, but it contains a verb in the URL. Changing that to GET /resource/<id>/editable solves that problem, but at a seemingly superficial level. Is that all there is to it -- change verbs to adjectives? If instead I use POST to retrieve the editable version, then how do I distinguish between the POST that initially retrieves it, vs the POST that saves it? My (weak) excuse for using POST would be that retrieving an editable version would cause a change of state on the server: locking the resource. But that only holds if my requirements are to implement such a lock, which is not always the case. PUT fails for the same reason, plus PUT is not enabled by default on the Web servers I'm running, so there are practical reasons not to use it (and DELETE). Note that even in the editable state, I haven't made any changes yet; presumably when I submit the resource to the Web server again, I'd POST it. But to get something that I can later POST, the server has to first serve up a particular representation. I guess another approach would be to have separate resources at the collection level: GET /read-only/resource/<id> and GET /editable/resource/<id> or GET /resource/read-only/<id> and GET /resource/editable/<id> ... but that looks pretty ugly to me. Thoughts?

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis
    This question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using the one-shot function (from here):
        unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len,
                            const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);
    means 40% of my library runtime is devoted to creating and taking down HMAC_CTX's behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly:
        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.
        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required.
    These two function calls are prefixed with "The following functions may be used if the message is not completely stored in memory". My data fits entirely in memory, so I chose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is used through the following two functions:
        HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data).
        HMAC_Final() places the message authentication code in md, which must have space for the hash function output.
    The Scope of the Application
    My application generates an authenticated (the HMAC is also used as a nonce), CBC-BF (Blowfish) encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows / Linux as the OS; nginx, Apache and IIS as web servers; and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread-safe and potentially have resumable processing state -- i.e., lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).
    The Question
    How do I get rid of the 40% overhead on each invocation in a way that is (1) thread-safe and (2) able to resume state? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread-local memory -- but how do I reuse the CTX's? Does the HMAC_Final() call make the CTX reusable? (2) optional: in this case I would have to create a pool of CTX's. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.
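
    One possible direction, sketched under the assumption that the pre-1.1.0 OpenSSL HMAC_CTX API quoted above is in use: keep one HMAC_CTX per thread, initialise it once, and re-prime it with HMAC_Init_ex() for each message. The HMAC_Init_ex() documentation allows a NULL key and digest to mean "reuse the previous ones", so the per-call context construction and teardown that the one-shot HMAC() function performs internally is avoided. This is a single-threaded sketch only; the thread-local or pooled bookkeeping is left out.
        #include <openssl/hmac.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            const char key[]  = "secret-key";
            const char *msg1  = "first message";
            const char *msg2  = "second message";
            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned int  mac_len = 0;

            HMAC_CTX ctx;
            HMAC_CTX_init(&ctx);                                     // once per thread

            HMAC_Init_ex(&ctx, key, sizeof(key) - 1, EVP_sha1(), NULL);
            HMAC_Update(&ctx, reinterpret_cast<const unsigned char *>(msg1), std::strlen(msg1));
            HMAC_Final(&ctx, mac, &mac_len);
            std::printf("mac1: %u bytes\n", mac_len);

            HMAC_Init_ex(&ctx, NULL, 0, NULL, NULL);                 // reuse key + digest
            HMAC_Update(&ctx, reinterpret_cast<const unsigned char *>(msg2), std::strlen(msg2));
            HMAC_Final(&ctx, mac, &mac_len);
            std::printf("mac2: %u bytes\n", mac_len);

            HMAC_CTX_cleanup(&ctx);                                  // once, at teardown
            return 0;
        }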

    Read the article

  • What to Return? Error String, Bool with Error String Out, or Void with Exception

    - by Ranger Pretzel
    I spend most of my time in C# and am trying to figure out which is the best practice for handling an exception and cleanly return an error message from a called method back to the calling method. For example, here is some ActiveDirectory authentication code. Please imagine this Method as part of a Class (and not just a standalone function.) bool IsUserAuthenticated(string domain, string user, string pass, out errStr) { bool authentic = false; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; // No exception thrown? We must be good, then. authentic = true; } catch (Exception e) { errStr = e.Message().ToString(); } return authentic; } The advantages of doing it this way are a clear YES or NO that you can embed right in your If-Then-Else statement. The downside is that it also requires the person using the method to supply a string to get the Error back (if any.) I guess I could overload this method with the same parameters minus the "out errStr", but ignoring the error seems like a bad idea since there can be many reasons for such a failure... Alternatively, I could write a method that returns an Error String (instead of using "out errStr") in which a returned empty string means that the user authenticated fine. string AuthenticateUser(string domain, string user, string pass) { string errStr = ""; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } catch (Exception e) { errStr = e.Message().ToString(); } return errStr; } But this seems like a "weak" way of doing things. Or should I just make my method "void" and just not handle the exception so that it gets passed back to the calling function? void AuthenticateUser(string domain, string user, string pass) { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } This seems the most sane to me (for some reason). Yet at the same time, the only real advantage of wrapping those 2 lines over just typing those 2 lines everywhere I need to authenticate is that I don't need to include the "LDAP://" string. The downside with this way of doing it is that the user has to put this method in a try-catch block. Thoughts? Is there another way of doing this that I'm not thinking of?

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails, and I'm trying to figure out how this is resolved in an actual implementation. Consider this example, where A is the receiver and B is the sender:
        A = abcde1234512345fghij
        B = abcde12345fghij
    As you can see, the only change is that one 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:
        abcde|12345|fghij
        abcde -> 495
        12345 -> 255
        fghij -> 520
        values = [495, 255, 520]
    Next we check to see if any hash values differ in A. If there's a matching block we can skip to the end of that block for the next check; if there's a non-matching block then we've found a difference. I'll step through this process.
        1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
        2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
        3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
        4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)
        5. No more data, we're done.
    Since every hash was found in the values list, we conclude that A and B are the same - which, in my humble opinion, isn't true. It seems to me this will happen whenever there are multiple blocks that share the same hash. What am I missing?
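
    As an aside, here is a toy version of the weak rolling checksum (Adler-style, in the spirit of the rsync paper; the constants and block handling are simplified). Its only job is to nominate candidate blocks cheaply while the window slides one byte at a time - the real algorithm then confirms each candidate with a strong per-block hash, emits explicit "copy block N / insert literal bytes" instructions rather than concluding anything about whole-file equality, and finishes with a whole-file checksum, which is why two blocks sharing a weak hash cannot silently make different files compare equal.
        #include <cstdint>
        #include <iostream>
        #include <string>

        struct Rolling {
            uint32_t a = 0, b = 0;
            size_t   n = 0;
            void init(const std::string &block) {
                a = b = 0; n = block.size();
                for (size_t i = 0; i < n; ++i) {
                    unsigned char c = static_cast<unsigned char>(block[i]);
                    a += c;
                    b += static_cast<uint32_t>(n - i) * c;   // weight n, n-1, ..., 1
                }
            }
            // slide the window: drop 'out', append 'in' -- O(1) instead of re-hashing
            void roll(unsigned char out, unsigned char in) {
                a = a - out + in;
                b = b - static_cast<uint32_t>(n) * out + a;
            }
            uint32_t digest() const { return (b << 16) | (a & 0xffff); }
        };

        int main() {
            std::string data = "abcde1234512345fghij";   // the receiver's file A
            Rolling r;
            r.init(data.substr(0, 5));
            for (size_t i = 5; i <= data.size(); ++i) {
                std::cout << data.substr(i - 5, 5) << " -> " << r.digest() << "\n";
                if (i < data.size())
                    r.roll(static_cast<unsigned char>(data[i - 5]),
                           static_cast<unsigned char>(data[i]));
            }
        }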

    Read the article

  • What benefits are there to storing Javascript in external files vs in the <head>?

    - by RenderIn
    I have an Ajax-enabled CRUD application. If I display a record from my database it shows that record's values for each column, including its primary key. For the Ajax actions tied to buttons on the page I am able to set up their calls by printing the ID directly into their onclick functions when rendering the HTML server-side. For example, to save changes to the record I may have a button as follows, with '123' being the primary key of the record. <button type="button" onclick="saveRecord('123')">Save</button> Sometimes I have pages with Javascript generating HTML and Javascript. In some of these cases the primary key is not naturally available at that place in the code. In these cases I took a shortcut and generate buttons like so, taking the primary key from a place it happens to be displayed on screen for visual consumption: ... <td>Primary Key: </td> <td><span id="PRIM_KEY">123</span></td> ... <button type="button" onclick="saveRecord(jQuery('#PRIM_KEY').text())">DoSomething</button> This definitely works, but it seems wrong to drive database queries based on the value of text whose purpose was user consumption rather than method consumption. I could solve this by adding a series of additional parameters to various methods to usher the primary key along until it is eventually needed, but that also seems clunky. The most natural way for me to solve this problem would be to simply situate all the Javascript which currently lives in external files, in the <head> of the page. In that way I could generate custom Javascript methods without having to pass around as many parameters. Other than readability, I'm struggling to see what benefit there is to storing Javascript externally. It seems like it makes the already weak marriage between HTML/DOM and Javascript all the more distant. I've seen some people suggest that I leave the Javascript external, but do set various "custom" variables on the page itself, for example, in PHP: <script type="text/javascript"> var primaryKey = <?php print $primaryKey; ?>; </script> <script type="text/javascript" src="my-external-js-file-depending-on-primaryKey-being-set.js"></script> How is this any better than just putting all the Javascript on the page in the first place? There HTML and Javascript are still strongly dependent on each other.

    Read the article

  • Patterns: Local Singleton vs. Global Singleton?

    - by Mike Rosenblum
    There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out. The pattern is pretty simple, and consists of two parts: A singleton factory, which creates objects based on the arguments passed to the factory method. Objects created by the factory. So far this is just a standard "singleton" pattern or "factory pattern". The issue that I'm asking about, however, is that the singleton factory in this case maintains a set of references to every object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created. When receiving a request for a "new" object, the factory first searches the dictionary to see if an object with the required arguments already exits. If it does, it returns that object, if not, it returns a new object and also stores a reference to the new object within the dictionary. This pattern prevents having duplicative objects representing the same underlying "thing". This is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging - having one object per item being represented can prevent multiple messages/events for a single underlying source. There are probably other reasons to use this pattern, but this is where I've found this useful. My question is: what to call this? In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. But there are multiple instances of this class, however, so it's not at all a true singleton. In my own personal terminology, I tend to call the factory method a "global singleton". I then call the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item) then the reference they each hold must be to the same exact object, hence "reference equality". But these are my own invented terms, and I am not sure that they are good ones. Is there standard terminology for this concept? And if not, could some naming suggestions be made? Thanks in advance...
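
    For what it's worth, this shape is usually described as a flyweight or multiton, or an "identity map" with interned instances: one factory, one canonical object per key, optionally held through weak references so unused objects can still be reclaimed. A hedged C++ sketch with illustrative names (not the poster's code):
        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>

        class Item {
        public:
            explicit Item(std::string key) : key_(std::move(key)) {}
            const std::string &key() const { return key_; }
        private:
            std::string key_;
        };

        class ItemFactory {
        public:
            std::shared_ptr<Item> get(const std::string &key) {
                if (auto existing = cache_[key].lock())
                    return existing;                        // same key -> same object
                auto fresh = std::make_shared<Item>(key);
                cache_[key] = fresh;                        // weak reference: reclaimable
                return fresh;
            }
        private:
            std::map<std::string, std::weak_ptr<Item>> cache_;
        };

        int main() {
            ItemFactory factory;
            auto a = factory.get("alpha");
            auto b = factory.get("alpha");
            std::cout << std::boolalpha << (a == b) << "\n";  // true: "reference equality"
        }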

    Read the article

  • Pass Variables in Inheritance (Obj-C)

    - by Marmik Shah
    I'm working on a project in Obj-C where I have a base class (ViewController) and a derived class (MultiPlayer). I have declared certain variables and properties in the base class. My properties are getting accessed from the derived class, but I'm not able to access the variables (int, char and bool types). I'm completely new to Obj-C, so I have no clue what's wrong. I have used the data types which are used in C and C++. Is there some specific way to declare variables in Obj-C? If so, how? Here are my files.
    ViewController.h
        #import <UIKit/UIKit.h>

        @interface ViewController : UIViewController

        @property (weak,nonatomic) IBOutlet UIImageView* backGroungImage;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView1;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView2;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView3;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView4;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView5;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView6;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView7;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView8;
        @property (strong,nonatomic) IBOutlet UIImageView *blockView9;
        @property (strong,nonatomic) UIImage *x;
        @property (strong,nonatomic) UIImage *O;
        @property (strong,nonatomic) IBOutlet UIImageView* back1;
        @property (strong,nonatomic) IBOutlet UIImageView* back2;

        @end
    ViewController.m
        #import "ViewController.h"

        @interface ViewController ()
        @end

        @implementation ViewController

        int chooseTheBackground = 0;
        int movesToDecideXorO = 0;
        int winningArrayX[3];
        int winningArrayO[3];
        int blocksTotal[9] = {8,3,4,1,5,9,6,7,2};
        int checkIfContentInBlocks[9] = {0,0,0,0,0,0,0,0,0};
        char determineContentInBlocks[9] = {' ',' ',' ',' ',' ',' ',' ',' ',' '};
        bool player1Win = false;
        bool player2Win = false;
        bool playerWin = false;
        bool computerWin = false;

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            if(chooseTheBackground==0) {
                UIImage* backImage = [UIImage imageNamed:@"MainBack1.png"];
                _backGroungImage.image=backImage;
            }
            if(chooseTheBackground==1) {
                UIImage* backImage = [UIImage imageNamed:@"MainBack2.png"];
                _backGroungImage.image=backImage;
            }
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
            // Dispose of any resources that can be recreated.
        }

        @end
    I am not able to use the above declared variables in my derived class!
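
    As a loose analogy only (sketched in C++ rather than Objective-C, reusing two variable names from the question for illustration): variables defined at file scope inside an implementation file are ordinary C globals belonging to that file, not per-instance state of the class, so a subclass compiled in another file cannot see them without a separate extern declaration. State that subclasses should inherit is normally declared as instance variables or properties in the class's header instead.
        #include <iostream>

        // "ViewController.cpp" equivalent: a file-scope global (one shared copy,
        // not visible to other source files) versus a protected instance member.
        static int chooseTheBackground = 0;   // global in this translation unit only

        class ViewController {
        protected:
            int movesToDecideXorO = 0;        // per-instance state, inherited by subclasses
        };

        // "MultiPlayer.cpp" equivalent: the subclass sees the inherited member,
        // but would not see chooseTheBackground if this were a separate file.
        class MultiPlayer : public ViewController {
        public:
            void play() { std::cout << ++movesToDecideXorO << "\n"; }
        };

        int main() {
            MultiPlayer m;
            m.play();                          // prints 1
            (void)chooseTheBackground;         // referenced only to avoid a warning
        }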

    Read the article

  • Boost MultiIndex - objects or pointers (and how to use them?)?

    - by Sarah
    I'm programming an agent-based simulation and have decided that Boost's MultiIndex is probably the most efficient container for my agents. I'm not a professional programmer, and my background is very spotty. I've two questions:
    1. Is it better to have the container contain the agents (of class Host) themselves, or is it more efficient for the container to hold Host *? Hosts will sometimes be deleted from memory (that's my plan, anyway... need to read up on new and delete). Hosts' private variables will get updated occasionally, which I hope to do through the modify function in MultiIndex. There will be no other copies of Hosts in the simulation, i.e., they will not be used in any other containers.
    2. If I use pointers to Hosts, how do I set up the key extraction properly? My code below doesn't compile.
        // main.cpp - ATTEMPTED POINTER VERSION
        ...
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/tokenizer.hpp>

        typedef multi_index_container<
            Host *,
            indexed_by< // hash by Host::id
                hashed_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,Host::getID) > // arg errors here
            > // end indexed_by
        > HostContainer;
        ...
        int main() {
            ...
            HostContainer testHosts;
            Host * newHostPtr;
            newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents );
            testHosts.insert( newHostPtr );
            ...
        }
    I can't find a precisely analogous example in the Boost documentation, and my knowledge of C++ syntax is still very weak. The code does appear to work when I replace all the pointer references with the class objects themselves. As best I can read it, the Boost documentation (see the summary table at the bottom) implies I should be able to use member functions with pointer elements.
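
    On the key-extraction question, a hedged sketch (with a simplified Host, not the poster's class) of a container of Host* keyed on getID(): Boost.MultiIndex key extractors are documented as pointer-aware, so a const_mem_fun extractor written against Host also works when the element type is Host*. The member function does need to be const, and using the template form const_mem_fun<Host,int,&Host::getID> sidesteps the macro-argument error flagged in the snippet above.
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <iostream>

        struct Host {
            explicit Host(int id) : id_(id) {}
            int getID() const { return id_; }   // const is required by const_mem_fun
        private:
            int id_;
        };

        namespace bmi = boost::multi_index;

        typedef bmi::multi_index_container<
            Host*,                               // elements are pointers
            bmi::indexed_by<
                bmi::hashed_unique<
                    bmi::const_mem_fun<Host, int, &Host::getID>  // extractor "sees through" the pointer
                >
            >
        > HostContainer;

        int main() {
            HostContainer hosts;
            hosts.insert(new Host(42));
            std::cout << (*hosts.find(42))->getID() << "\n";
            // owning raw pointers means manual cleanup; shared_ptr<Host> elements also work
            for (Host *h : hosts) delete h;
        }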

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • Sort Your Emails by Conversation in Outlook 2010

    - by Matthew Guay
    Do you prefer the way Gmail sorts your emails by conversation?  Here’s how you can use this handy feature in Outlook 2010 too.
    One exciting new feature in Outlook 2010 is the ability to sort and link your emails by conversation.  This makes it easier to know what has been discussed in emails, and helps you keep your inbox more tidy.  Some users don’t like their emails linked into conversations, and in the final release of Outlook 2010 it is turned off by default.  Since this is a new feature, new users may overlook it and never know it’s available.  Here’s how you can enable conversation view and keep your email conversations accessible and streamlined.
    Activate Conversation View
    By default, your inbox in Outlook 2010 will look much like it always has in Outlook…a list of individual emails.  To view your emails by conversation, select the View tab and check the Show as Conversations box on the top left.  Alternately, click on the Arrange By tab above your emails and select Show as Conversations.  Outlook will ask if you want to activate conversation view in only this folder or in all folders.  Choose All folders to view all emails in Outlook in conversations.
    Outlook will now re-sort your inbox, linking emails in the same conversation together.  Individual emails that don’t belong to a conversation will look the same as before, while conversations will have a small white triangle (caret) at the top left of the message title.  Select the message to read the latest email in the conversation, or click the triangle to see all of the messages in the conversation.  Now you can select and read any one of them.
    Most email programs and services include the previous email in the body of an email when you reply.  Outlook 2010 can recognize these previous messages as well.  You can navigate between older and newer messages with the popup Next and Previous buttons that appear when you hover over the older email’s header.  This works both in the standard Outlook preview pane and when you open an email in its own window.
    Edit Conversation View Settings
    Back in the Outlook View tab, you can tweak your conversation view to work the way you want.  You can choose to have Outlook Always Expand Conversations, Show Senders Above the Subject, and Use Classic Indented View.  By default, Outlook will show messages from other folders in the conversation, which is generally helpful; however, if you don’t like this, you can uncheck it here.  All of these settings will stay the same across all of your Outlook accounts.
    If you choose Indented View, it will show the title on top and then an indented message entry underneath showing the name of the sender.  The Show Senders Above the Subject view makes it more obvious who the email is from and who else is active in the conversation.  This is especially useful if you usually only email certain people about certain topics, making the subject lines less relevant.  Or, if you decide you don’t care for conversation view, you can turn it off by unchecking the box in the View tab as above.
    Conclusion
    Although it may take new users some time to get used to, conversation view can be very helpful in keeping your inbox organized and letting important emails stay together.  If you’re a Gmail user syncing your email account with Outlook, you may find this useful, as it makes Outlook 2010 work more like Gmail, even when offline.  If you’d like to sync your Gmail account with Outlook 2010, check out our articles on syncing it with POP3 and IMAP.
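    If you want to poke at the same idea from code, here is a minimal sketch (not from the article) that groups Inbox messages by their conversation topic using Outlook’s COM object model from Python.  It assumes a Windows machine with Outlook installed and the pywin32 package available; grouping by ConversationTopic is only a rough approximation of what conversation view does, since Outlook 2010’s real threading also relies on conversation IDs under the hood.

        import win32com.client

        OL_FOLDER_INBOX = 6   # olFolderInbox in the OlDefaultFolders enumeration
        OL_MAIL_ITEM = 43     # olMail in the OlObjectClass enumeration

        # Attach to a running (or newly started) Outlook instance via COM.
        outlook = win32com.client.Dispatch("Outlook.Application")
        inbox = outlook.GetNamespace("MAPI").GetDefaultFolder(OL_FOLDER_INBOX)

        # Group mail items by ConversationTopic, the normalized subject Outlook
        # uses to tie replies and forwards back to the original message.
        conversations = {}
        for item in inbox.Items:
            if getattr(item, "Class", None) != OL_MAIL_ITEM:
                continue  # skip meeting requests, delivery reports, etc.
            topic = item.ConversationTopic or item.Subject
            conversations.setdefault(topic, []).append(item)

        # Print only the threads that actually contain more than one message.
        for topic, items in conversations.items():
            if len(items) > 1:
                print(f"{topic}: {len(items)} messages")

    Outlook 2010 also added a richer Conversation object (reachable from a mail item’s GetConversation method) that walks the actual thread, including messages filed in other folders; the simple topic grouping above just has the advantage of also working against older Outlook versions.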

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate
    Last Sunday I went on a walkabout.  That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination.  I favor “social trails”, the unmapped routes pioneered by both animal and human explorers.  These tracks are usually more challenging than established, marked routes, and you can’t be 100% sure of where you’re going to end up.  But I’ve found the rewards to be much greater.
    For a while, I pondered how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario.  Midsize companies buy Oracle to grow, and so I’m continually amazed and fascinated by the success stories our customers relate to me.  Oracle’s successful midsize companies are growing via innovation, agility, and opportunity.  For them, the glass isn’t half full—it’s overflowing.
    Growing Midsize Companies are Thankful for: Innovation
    The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May.  You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring, and measurement products.  For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across industrial, public sector, and defense sectors.  Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch.  I cut it down.  The name of that customer’s company was stamped on the housing.  “It’s a radiosonde from a high altitude weather balloon,” he told me the next day.  “Keep it as a souvenir.”  It sits on my fireplace mantle and elicits many questions from guests.
    Growing Midsize Companies are Thankful for: Agility
    In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets.  They are best known for their epic outdoor theme parks.  However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddie rides, and wave pools.  With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day.  This company has grown dramatically since first implementing Oracle ERP more than a decade ago.  Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments.  They recently went live on Oracle’s Fusion HCM and Taleo.  Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices.
    Growing Midsize Companies are Thankful for: Opportunity
    I have three GPS apps on my iPhone.  I use them mainly to keep track of my stats—distance, time, and vertical gain.  However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”).  In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce.  These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery.  Forgive the overused metaphor, but this is route optimization on steroids.  The company’s executive team saw an opportunity in this emerging market and went “all in”.  Consequently, they are being rewarded with tremendous growth and market domination by giving their clients the ability to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity.
    This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top tier software to drive and manage growth.  And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy.
    Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt Evans just a few miles to the west.  There, nowhere near a house or a trail, someone had placed a wooden lounge chair.  Its wood was worn and faded but it was sturdy.  I had lunch and a cold drink in my pack.  Opportunity knocked and I seized it.  Happy Thanksgiving.

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential, and had a long way to grow.  Vendor architectures were proprietary and limited in function at first, growing in capability and complexity over time.  There were few if any “standards”, let alone “open standards”, and the concepts of “open systems” and “open architectures” were far in the future.  Companies employed intelligent, talented, and creative people to implement the best possible solutions for their company.  At first, those solutions were “unique” to each company.
    As time progressed, standards emerged, companies shared knowledge, the business capability supplied by technology grew, and companies continued to expand their use of technology.  Taking advantage of change required companies to struggle through periodic “revolutionary” change cycles: costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses.  The pace of technological invention and implementation grew at an ever increasing rate, making the “revolutionary” approach based upon “proprietary” or “closed” architectures or technologies no longer viable.
    Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are those that are good at implementing, leveraging, and exploiting change.
    Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where “evolutionary” change is crucial.  Companies can no longer afford to build “unique IT”, nor can they afford regular intervals of “revolutionary” change, with the associated costs and risks.  Human ingenuity was once again up to the task, turning technology into a platform that supports business through evolutionary change by employing “open”: open standards, open systems, open architectures, and open solutions.  Employing “open” enables companies to implement systems based upon technology, capability, and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business.  This allows companies to focus on providing “unique business value” rather than “unique IT”.
    The big question!  Does moving from “older” technology that no longer meets the needs of today’s business to new “open” technology require yet another “revolutionary” change?  A “revolutionary” change with a short half-life, camouflaging reality with great marketing?  The answer is “perhaps”.  With the endless options available to choose from, it is entirely possible to implement a solution that may work well today but in five years’ time will become yet another albatross for the company to bear.  Some solutions may look good today, solving a budget challenge by reducing cost or solving a specific tactical challenge, but result in highly complex environments that may be difficult to manage and maintain and that limit the future potential of your business.  Put differently, some solutions might push today’s challenge into the future, resulting in a more complex and expensive solution.  There is no such thing as a “one size fits all” IT solution for business.
    If all companies implemented business solutions based upon technology that required or forced the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through “unique business value”.  It would be equally difficult to “evolve” to meet or exceed business needs and keep up with today’s rapid pace of change.  How does one ensure that they do not jump from one trap directly into another?  Or, to put it positively, there are solutions available today that can address these challenges and issues.  How does one ensure that the buying decision of today will serve the business well for years into the future?
    Intelligent & Informed decisions - “buying right”
    In a previous blog entry, we discussed the value of linking tactical to strategic.  The key is driving the focus to what is best for your business, handling today’s tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives.  When considering the plethora of possible options that provide various approaches to solving today’s complex business problems, it is extremely important to ensure that the vendors supplying those options focus on what is best for your business, supply sufficient information, provide adequate answers to questions, address challenges, issues, concerns, and objections honestly and openly, and focus on supplying solutions that are tailored for, and deliver the most business value possible for, your business.
    Here are a few questions to consider relative to the proposed options that should help ensure that today’s solution doesn’t become tomorrow’s problem.  Do the proposed solutions:
    - Solve the problem(s) you are trying to address?
    - Provide a solid foundation upon which to grow/enhance your business?
    - Provide tactical gains that align with and enable your strategic business goals/objectives?
    - Provide an infrastructure that can be leveraged with subsequent projects?
    - Solve problems for the business overall and the lines of business, or just IT?
    - Simplify your current environment?
    - Provide the basis for business efficiency, agility, and clarity; for governance, risk, and compliance; and for real-time business visibility and trend analysis?
    And does your IT staff have the knowledge and experience to successfully manage the proposed systems once they are deployed in production?
    Done well, you will be presented with options tailored to your business that enable you to drive the “unique business value” necessary to help your business stand out from others, creating a distinct competitive advantage and delivering what your customers need, when they need it, so you can attract new customers, win new business, and grow top line revenue, all at a cost that provides a strong Return on Investment/Return on Assets.  The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article
