Search Results

Search found 1402 results on 57 pages for 'linking'.

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for Unix-like OSes which can store deduplicated backups, i.e. only non-redundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication at the per-file level. However, as I want to back up large files (Thunderbird mailboxes of 3GB, VMware images of 10GB), such files are stored again in their entirety even if just a few bytes change. Then there are rsync-based tools like rdiff-backup, which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that currently does this is obnam (http://liw.fi/obnam), and it even supports zlib compression and gpg encryption -- nice! But it is very slow, AFAICT. Does anyone know of other, solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
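
    To make the block approach concrete, here is a minimal sketch of the idea, assuming fixed-size 1MB blocks and an invented on-disk store layout (real tools such as obnam use smarter, content-defined chunking):

        #!/bin/sh
        # Store each 1MB block of a file under its SHA-256 hash; a manifest
        # records the block sequence so the file can be reassembled later.
        STORE="$HOME/backup-store"; mkdir -p "$STORE/blocks"

        backup_file() {
            src="$1"; manifest="$STORE/$(basename "$src").manifest"
            : > "$manifest"; i=0
            while dd if="$src" of=/tmp/block bs=1M count=1 skip="$i" 2>/dev/null
                  [ -s /tmp/block ]; do
                hash=$(sha256sum /tmp/block | cut -d' ' -f1)
                # Write the block only the first time this content is seen.
                [ -e "$STORE/blocks/$hash" ] || cp /tmp/block "$STORE/blocks/$hash"
                echo "$hash" >> "$manifest"
                i=$((i + 1))
            done
        }

        restore_file() {   # reassemble: cat blocks back in manifest order
            while read -r hash; do cat "$STORE/blocks/$hash"; done < "$1"
        }

    With this scheme, a 10GB image where one block changed costs one new 1MB block plus a new manifest, rather than another full copy.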

    Read the article

  • Free blog sites where the blogs can (if the template does) validate as XHTML 1.0 Strict?

    - by Deleted
    I'm looking for a site where I can register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem related only to the theme/template in use, but that's unfortunately not the case. One example of a provider which can't fulfill this requirement is Blogger. Although the blog pages there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited by that markup type (the XHTML generated by Blogger makes the page as a whole invalid). I've sent a mail to Tumblr to see if it is possible with them, but so far my reply consists of them having forwarded my mail with a "suggestion" to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later; time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem. To sum things up, I'm looking for any provider of a free blog whose system doesn't get in the way of creating/using a theme that validates as XHTML 1.0 Strict, and which is preferably large, or at least likely to stay around for a couple of years to come (though I'm willing to take my chances if none of the established providers are up to the task). Thank you for reading! I hope you know of a provider which would be suitable, preferably with proof by linking to a blog there which validates. I'm not looking for suggestions to look into, as there are far too many to investigate and far too little time. If you know of something for sure, I'd be very happy to hear about it.

    Read the article

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have Apache 2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options: allow _www access to my Documents folder; put the files I want to share outside of my Documents folder; or hard-link the files outside of my Documents folder and point Apache to the hard links. None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder, I really want to keep the files in my Documents folder for other reasons, and the files are changing all the time, so hard-linking would not always reflect the right file structure (and, as I understand it, you can't hard-link a directory, though if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
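
    For what it's worth, one workaround sketch, assuming a MacPorts layout under /opt/local/apache2 (the port and file names are illustrative): run a second, personal httpd instance under your own account on an unprivileged port, so _www never needs to enter Documents at all.

        # Minimal config for a personal instance; depending on how MacPorts
        # built httpd you may also need the LoadModule lines from the stock
        # /opt/local/apache2/conf/httpd.conf.
        cat > ~/personal-httpd.conf <<'EOF'
        ServerRoot "/opt/local/apache2"
        Listen 127.0.0.1:8080
        PidFile "/Users/Me/personal-httpd.pid"
        ErrorLog "/Users/Me/personal-httpd-error.log"
        DocumentRoot "/Users/Me/Documents/abc"
        <Directory "/Users/Me/Documents/abc">
            Order allow,deny
            Allow from all
        </Directory>
        EOF
        # Started without sudo, this instance runs as your user, not _www.
        /opt/local/apache2/bin/httpd -f ~/personal-httpd.conf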

    Read the article

  • VPN with VLANs? [closed]

    - by Craig
    As usual, I'm sure I'm in way over my head on this one. My networking skills are limited, so bear with me if you will. What I have are a few testing servers at my house, as well as at a friend's house, that I want to link together so they can see each other (a VPN, right? I've done those before). We want to be able to see all the servers and work with them from either location, and all the servers also need to be able to see each other. But we don't want to see each other's PCs, printers, PS3s, etc. How do we pull that trick off? Multiple VLANs? Subnets? What? If hardware matters, I have an old PC I was planning on loading pfSense onto, because my current el-cheapo router doesn't support VPN. The VPN linking the houses is about the only thing I'm sure of; beyond that, I'm lost. I'm not a complete noob, but like I said, I'm not so sharp with the more complex networking. I do, however, read well... so use lots of descriptive words, and feel free to link away to long, dry articles if necessary. :-)
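
    To sketch the usual answer with hypothetical addressing: put the servers at each house on their own subnet, and route only those subnets across the tunnel; the PCs, printers, and PS3s stay on the untouched house LANs. In OpenVPN's simple static-key mode (pfSense can express the same idea through its OpenVPN and firewall screens) that looks roughly like:

        # House A servers: 192.168.10.0/24; house B servers: 192.168.20.0/24;
        # each house's personal gear stays on its own 192.168.1.x LAN.
        cat > /etc/openvpn/house-link.conf <<'EOF'
        dev tun
        remote other-house.example.org      # placeholder for the far end
        ifconfig 10.8.0.1 10.8.0.2          # tunnel IPs; swap on the far side
        secret /etc/openvpn/static.key      # same pre-shared key both ends
        route 192.168.20.0 255.255.255.0    # route ONLY the remote servers
        EOF
        openvpn --config /etc/openvpn/house-link.conf
        # Nothing ever routes 192.168.1.0/24 into the tunnel, so each
        # house's PCs and printers remain invisible to the other side.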

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all these is 177 pages of 8 images each, with links to zip files of JPGs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm through archive_bookname/index.177.htm, and each page has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, and <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
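
    For what it's worth, wget can handle this, provided you first export your logged-in browser cookies to a cookies.txt file (several browser add-ons do this); the hostname below is a placeholder:

        #!/bin/sh
        # Fetch each index page, follow its links one level deep, and keep
        # only the .jpg.zip archives; --load-cookies reuses the browser
        # session so the login wall is satisfied.
        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt \
                 --recursive --level=1 \
                 --accept '*.jpg.zip' \
                 --no-directories --directory-prefix=downloads \
                 "http://example.com/archive_bookname/index.$i.htm"
        done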

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server, with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but each seems to be missing major steps or is out of date. And every article keeps linking back to a Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache and Mongrel/Passenger, using ARR and UrlRewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server on a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
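
    For the record, the IIS half of that reverse proxy is small once ARR and URL Rewrite are installed; a sketch, assuming the Rails app listens locally on port 3000 (adjust for your Mongrel/Passenger setup) and that ARR's server-level "Enable proxy" option is switched on:

        # Site-level web.config for the IIS site that fronts Rails:
        cat > web.config <<'EOF'
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Rails reverse proxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:3000/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
        EOF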

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledge base for a piece of software, and I'd also like to be able to track bugs and common points of failure in that application. Linking knowledge-base articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries: I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or store its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.
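
    If the data does end up as XML on disk, even stock command-line tooling can run the multi-tag queries Wiki On A Stick can't; a sketch with an invented schema:

        #!/bin/sh
        # One file holding both bugs and KB articles, tagged via an
        # attribute; xmllint (from libxml2) runs XPath queries over it.
        cat > kb.xml <<'EOF'
        <kb>
          <bug id="17" tags="startup crash"><title>Crash on launch</title></bug>
          <article id="a3" tags="startup config"><title>Startup settings</title></article>
          <article id="a4" tags="printing"><title>Printer setup</title></article>
        </kb>
        EOF
        # Everything tagged "startup", whether bug or article:
        xmllint --xpath '//*[contains(@tags,"startup")]' kb.xml
        # Articles tagged both "startup" and "config":
        xmllint --xpath '//article[contains(@tags,"startup") and contains(@tags,"config")]' kb.xml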

    Read the article

  • 1 PC, 2 consoles (as in 2 monitors, keyboards and mice)

    - by ciuly
    I have this desire to "kill two birds with one stone". Currently, I have one server running round the clock, one laptop that runs about 8 hours a day, 7 days a week, and a desktop that runs about the same length of time. All three are... old, to say the least. So there is a great need to upgrade (well, the server might handle its job for another year or so, but that only depends on how much time I have to put it to "work"). Now, I'm "dreaming" of only one PC. I'm thinking VMware's ESX. So there would be a VM for the server, a VM for the "laptop", and one for the "desktop". And obviously I'll have to somehow "link" a set of monitor/keyboard/mouse to each of the laptop/desktop VMs. The server doesn't need such things, obviously (it doesn't have them at this moment either). Is something like this possible? ESX is not a requirement; it's just something I found that answers part of my problem, but there still remain the two monitor/keyboard/mouse sets that need connecting and "linking" to the appropriate VMs. Why would I want to do this? Well, first of all, it's much cheaper to upgrade one PC than three. Then, the power consumption is obviously lower. Plus the extra space, and it allows me to better separate networks and services. Thanks.

    Read the article

  • Using Google Voice with an internal SIP Server

    - by BHelman
    Let me be upfront and say first that I am new to the finer details of VoIP; my former understanding extended only as far as Skype. Don't worry, I understand a lot more of it now. The situation is this: I have a Google number that is actually very close to the area in which I live. It's convenient, as it's not a long-distance call for anyone, and I love its features, but I want it to forward to a VoIP phone, which will be my residential phone. Obviously, Google does not allow forwarding calls to domains (yet). So for now I use SIPGate, with a SIPGate number, to forward to a softphone. I can configure a VoIP phone to interact with my account easily enough. The problem lies with SIPGate itself, really: Google Voice gives free unlimited inbound and outbound calling, while SIPGate charges for outbound. So a VoIP phone would work, but I could never make a call on it (for free). So let's say I set up an Asterisk server, or any other SIP server. What is the best way to go about linking my server to Google Voice? I looked into IPKall, but it only specifies inbound calling, not outbound. Or is that just assumed? Does a SIP server handle outbound calling by itself?
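
    For what it's worth, the inbound half is the easy part and usually looks something like the sketch below (the peer and extension names are invented): Google Voice forwards to the DID, the DID provider delivers the call to Asterisk, and the dialplan rings your SIP phone. Outbound is the genuinely hard part; you would need some kind of Google Voice gateway or trunk, since a SIP server has no free outbound path of its own.

        # Append a context to /etc/asterisk/extensions.conf:
        cat >> /etc/asterisk/extensions.conf <<'EOF'
        [from-did]                            ; calls arriving from the DID
        exten => _X.,1,Answer()
        exten => _X.,n,Dial(SIP/homephone,20) ; ring the house phone for 20s
        exten => _X.,n,Voicemail(100@default) ; then fall through to voicemail
        exten => _X.,n,Hangup()
        EOF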

    Read the article

  • Referer is passed from HTTPS to HTTP in some cases... How?

    - by ravisorg
    In theory, browsers do not pass on referer information from HTTPS to HTTP sites, and in my experience this has always been true. But I just found an exception, and I want to understand why it works so I can use it as well. Search for "what is my referer" on https://www.google.ca/, e.g.: https://www.google.ca/search?q=what+is+my+referer There are a few sites in the results that will show your referer, and they all seem to "work" when they shouldn't. For example, click the www.whatismyreferer.com one. I get: Your referer: https://www.google.ca/ Note that sometimes, rarely, I get "no referer" as the result; go back and click the link again and it'll "work" the next time. This should not happen: www.whatismyreferer.com is a non-HTTPS site, so the Referer header should not be passed, but it is. What's going on here, and how can I do the same from my HTTPS site to the HTTP sites I'm linking to?
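
    One plausible mechanism, and this is my assumption rather than something confirmed by inspecting Google's markup: the (then draft) referrer meta tag, which lets an HTTPS page opt back in to sending its origin as the Referer even on HTTPS-to-HTTP navigations. That would also explain why the value is the bare origin rather than the full search URL, and why browsers without support report no referer. A minimal demo page:

        cat > referrer-demo.html <<'EOF'
        <!DOCTYPE html>
        <html>
        <head>
          <!-- "origin" sends just scheme+host (e.g. https://example.com/)
               as the Referer, even when navigating from HTTPS to HTTP -->
          <meta name="referrer" content="origin">
        </head>
        <body>
          <a href="http://www.whatismyreferer.com/">test the referer</a>
        </body>
        </html>
        EOF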

    Read the article

  • Changing the name of a binary packaged application and its invoking command

    - by jerkstore
    I have taken the source code of a large project, App A, and made many modifications to it to produce my version, App B. Both App A and App B compile cleanly on Debian and Red Hat, and now I would like to build binary packages for both platforms. The last modification I need to make is ensuring App B can be installed alongside App A without any interference: I should be able to invoke both application-a and application-b in the terminal, and have both be listed as separate software in whatever desktop environment is present. The projects have a debian/ folder (containing rules, control, etc.) and an rpm/ folder containing a SPEC file. Currently, building and installing the .rpm and .deb packages works, except that App B is recognized as App A and therefore does not meet the aforementioned requirements. ldd shows the programs have exactly the same dependencies, and I am not able to pursue static linking of libraries. What modifications do I need to make to my project to achieve the desired outcome? Please be specific, as I do not have much experience with the packaging process.
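
    A sketch of the Debian side (names invented; the RPM side is parallel: change Name: in the SPEC and install under non-conflicting paths): give App B its own source and binary package name, and make sure no installed file path overlaps with App A's.

        # debian/control with new source and binary package names:
        cat > debian/control <<'EOF'
        Source: application-b
        Section: utils
        Priority: optional
        Maintainer: You <you@example.com>
        Build-Depends: debhelper (>= 7)
        Standards-Version: 3.9.1

        Package: application-b
        Architecture: any
        Depends: ${shlibs:Depends}, ${misc:Depends}
        Description: modified fork of application-a
         Coexists with application-a; installs /usr/bin/application-b only.
        EOF
        # In debian/rules (or the upstream install target), rename the
        # binary at install time so the packages never ship the same file:
        #   install -D build/app debian/application-b/usr/bin/application-b
        # and ship App B's own .desktop file so desktops list it separately.

    As long as dpkg -c (or rpm -qlp) on the two packages shows no shared path, they should install side by side without conflicts.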

    Read the article

  • Problems installing Memcache (PECL extension)

    - by Petrus
    I have installed memcached fine, and now I need to install the PECL extension memcache. I'm running Red Hat ES5 x86_64. The installation gives me this: downloading memcache-2.2.6.tgz ... Starting to download memcache-2.2.6.tgz (35,957 bytes) ..........done: 35,957 bytes 11 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 Enable memcache session handler support? [yes] : Notice: Use of undefined constant STDIN - assumed 'STDIN' in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in /usr/lib/php/PEAR/Frontend/CLI.php on line 304 building in /root/tmp/pear-build-root/memcache-2.2.6 running: /root/tmp/pear/memcache/configure --enable-memcache-session=yes checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcache support... yes, shared checking whether to enable memcache session handler support... yes checking for the location of ZLIB... no checking for the location of zlib... /usr checking for session includes... /usr/include/php checking for memcache session support... enabled checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 
98304 checking command to parse /usr/bin/nm -B output from cc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC checking if cc PIC flag -fPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -o memcache.lo mkdir .libs cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -fPIC -DPIC -o .libs/memcache.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -o memcache_queue.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -fPIC -DPIC -o .libs/memcache_queue.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -o memcache_standard_hash.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -fPIC -DPIC -o .libs/memcache_standard_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -o memcache_consistent_hash.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -fPIC -DPIC -o .libs/memcache_consistent_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -o memcache_session.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -fPIC -DPIC -o .libs/memcache_session.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=link cc -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -o memcache.la -export-dynamic -avoid-version -prefer-pic -module -rpath /root/tmp/pear-build-root/memcache-2.2.6/modules memcache.lo memcache_queue.lo memcache_standard_hash.lo memcache_consistent_hash.lo memcache_session.lo cc -shared .libs/memcache.o .libs/memcache_queue.o .libs/memcache_standard_hash.o .libs/memcache_consistent_hash.o .libs/memcache_session.o -Wl,-soname -Wl,memcache.so -o .libs/memcache.so creating memcache.la (cd .libs && rm -f memcache.la && ln -s ../memcache.la memcache.la) /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=install cp ./memcache.la /root/tmp/pear-build-root/memcache-2.2.6/modules cp ./.libs/memcache.so /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.so cp ./.libs/memcache.lai /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.la PATH="$PATH:/sbin" ldconfig -n /root/tmp/pear-build-root/memcache-2.2.6/modules ---------------------------------------------------------------------- Libraries have been installed in: /root/tmp/pear-build-root/memcache-2.2.6/modules If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,--rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- Build complete. Don't forget to run 'make test'. 
running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-memcache-2.2.6" install Installing shared extensions: /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/ running: find "/root/tmp/pear-build-root/install-memcache-2.2.6" | xargs ls -dils 361232 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6 361263 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr 361264 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib 361265 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php 361266 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions 361267 4 drwxr-xr-x 2 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626 361262 236 -rwxr-xr-x 1 root root 235575 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so Build process completed successfully Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.6 Extension memcache enabled in php.ini The memcache.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 I tried as well to install this extension "memcached 1.0.2 (PHP extension for interfacing with memcached via libmemcached library)" but it failed: downloading memcached-1.0.2.tgz ... Starting to download memcached-1.0.2.tgz (22,724 bytes) ........done: 22,724 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /root/tmp/pear-build-root/memcached-1.0.2 running: /root/tmp/pear/memcached/configure checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcached support... 
yes, shared checking for libmemcached... yes, shared checking whether to enable memcached session handler support... yes checking whether to enable memcached igbinary serializer support... no checking for ZLIB... yes, shared checking for zlib location... /usr checking for session includes... /usr/include/php checking for memcached session support... enabled checking for memcached igbinary support... disabled checking for libmemcached location... configure: error: memcached support requires libmemcached. Use --with-libmemcached-dir= to specify the prefix where libmemcached headers and library are located ERROR: `/root/tmp/pear/memcached/configure' failed The memcached.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 Is there a kind soul out there that can solve this puzzle?
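
    For anyone hitting the same two walls, a sketch of the usual fixes (Red Hat flavoured; exact package names vary by repository):

        #!/bin/sh
        # 1. The built memcache.so landed in /usr/lib/php/extensions/...,
        #    while you were checking /usr/local/lib/...; point php.ini at
        #    the directory PHP actually reports instead of copying files:
        php -i | grep extension_dir
        echo 'extension=memcache.so' >> /etc/php.ini
        # 2. pecl/memcached will not configure until the libmemcached
        #    headers are installed:
        yum install libmemcached libmemcached-devel
        pecl install memcached   # answer the libmemcached prompt with /usr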

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also work with all versions of ASP.NET (and even with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Measuring the SEO of your website with the Microsoft SEO Toolkit

    A few months ago I blogged about the free SEO Toolkit that we've shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I'll cover later in this blog post.

    Search Relevancy and URL Splitting

    Two of the important things that search engines evaluate when assessing your site's "search relevancy" are: how many other sites link to your content (search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy), and the uniqueness of the content they find on your site (if search engines find that the content is duplicated in multiple places around the Internet, or on multiple URLs on your site, then they are likely to drop the relevancy of the content). One of the things you want to be very careful to avoid when building public-facing sites is allowing different URLs to retrieve the same content within your site.  Doing so will hurt you with both of the factors above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it.

    4 Really Common SEO Problems Your Sites Might Have

    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens, external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.

    SEO Problem #1: Default Document

    IIS (and other web servers) supports the concept of a "default document".  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient – but it means that by default this content is available via two different publicly exposed URLs (which is bad).  For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don't realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes sites support a web-site with both a leading "www" hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search ranking:

    http://scottgu.com/albums.aspx/
    http://www.scottgu.com/albums.aspx/

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven't been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The "good news" is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green "Install Now" button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine. Once installed you'll find that a new "URL Rewrite" icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site. Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the "Add Rule…" link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the "default document" feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let's look at how we can create such a rule.  We'll begin by clicking the "Add Rule" link in the screenshot above.
    This will cause the below dialog to display. We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule.  This will display an empty pane. Don't worry – setting up the above rule is easy.  The following 4 steps explain how to do so.

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let's name this rule our "Default Document URL Rewrite" rule.

    Step 2: Set Up the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.  Don't worry if you aren't good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule:

    (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx).  Because the "ignore case" checkbox is selected, it will match both "Default.aspx" as well as "default.aspx" within the URL.  One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring. Above I've added a "products/default.aspx" URL and clicked the "Test" button.  This will give me immediate feedback on whether the rule will execute for it.

    Step 3: Set Up a Permanent Redirect Action

    We'll then set up an action to occur when our regular expression pattern matches the incoming URL. In the dialog above I've changed the "Action Type" drop down to be a "Redirect" action.  The "Redirect Type" will be a HTTP 301 Permanent redirect – which means search engines will follow it. I've also set the "Redirect URL" property to be:

    {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.
    Step 4: Apply and Save the Rule

    Our final step is to click the "Apply" button in the top right-hand corner of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application's root web.config file (under a <system.webServer/rewrite> configuration section):

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again. Unlike the previous scenario (where we created a "Blank Rule"), with this scenario we can take advantage of a built-in "Enforce lowercase URLs" rule template.  When we click the "ok" button we'll see the following dialog, which asks us if we want to create a rule that enforces the use of lowercase letters in URLs. When we click the "Yes" button we'll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically sends users to a lower-case version of the URL. We can click the "Apply" button to use this rule "as-is" and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I'm going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET's built-in "WebResource.axd" handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn't necessary and will add an extra HTTP redirect to many of my pages.
    The good news is that adding a condition that prevents my URL Rewriting rule from running against certain URLs is easy.  We simply need to expand the "Conditions" section of the form above. We can then click the "Add" button to add a condition clause.  This will bring up the "Add Condition" dialog. Above I've entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd".  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons.  So a condition clause makes sense to add.

    When I click the "ok" button above and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                    <rule name="Lower Case URLs" stopProcessing="true">
                        <match url="[A-Z]" ignoreCase="false" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{ToLower:{URL}}" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again. The URL Rewrite admin tool has a built-in "Append or remove the trailing slash symbol" rule template.
    When we select it and click the "ok" button we'll see the following dialog, which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present. As within our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect from happening for those URLs. When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash – and if the URL does not map to either a directory or a file.  This will save the following additional rule to our web.config file:

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                    <rule name="Lower Case URLs" stopProcessing="true">
                        <match url="[A-Z]" ignoreCase="false" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{ToLower:{URL}}" />
                    </rule>
                    <rule name="Trailing Slash" stopProcessing="true">
                        <match url="(.*[^/])$" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com
    http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier involves scenarios where a site works with both a leading "www" hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search ranking:

    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again. The URL Rewrite admin tool has a built-in "Canonical domain name" rule template.
When we select it and click the “OK” button, we’ll see the following dialog, which asks us if we want to create a redirect rule that automatically redirects users to a primary host-name URL:

Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect whenever a request arrives under any other host name.  This will save the following additional rule to our web.config file:

<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Canonical Hostname">
                    <match url="(.*)" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                        <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                    </conditions>
                    <action type="Redirect" url="http://scottgu.com/{R:1}" />
                </rule>
                <rule name="Default Document" stopProcessing="true">
                    <match url="(.*?)/?Default\.aspx$" />
                    <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                    <match url="[A-Z]" ignoreCase="false" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                        <add input="{URL}" pattern="WebResource.axd" negate="true" />
                    </conditions>
                    <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
                <rule name="Trailing Slash" stopProcessing="true">
                    <match url="(.*[^/])$" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                        <add input="{URL}" pattern="WebResource.axd" negate="true" />
                    </conditions>
                    <action type="Redirect" url="{R:1}/" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site:

http://www.scottgu.com/albums.aspx
http://scottgu.com/albums.aspx

Notice that the first URL (which has the “www” prefix) now automatically redirects to the second URL, which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the redirect and update their page ranking.

4 Simple Rules for Improved SEO

The above 4 rules are pretty easy to set up and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web site – and without breaking any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search-relevancy ranking – which will list your site higher in search results and drive more traffic to it.
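Before moving on: if you'd like to verify these permanent redirects from code rather than eyeballing them in a browser, here is a minimal C# sketch (the URLs are just this post's examples; substitute your own):

using System;
using System.Net;

class RedirectCheck
{
    static void Main()
    {
        string[] urls =
        {
            "http://scottgu.com/Albums.aspx",     // should 301 to the lower-case URL
            "http://www.scottgu.com/albums.aspx"  // should 301 to the non-www host
        };

        foreach (string url in urls)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.AllowAutoRedirect = false;  // hand back the 3xx instead of following it
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("{0} -> {1} {2}",
                    url, (int)response.StatusCode, response.Headers["Location"]);
            }
        }
    }
}

With AllowAutoRedirect disabled, the 301 response is returned to the caller rather than followed, so you can assert on both the status code and the Location header.  Note that the bare-host trailing-slash case can't be tested this way: a request for http://scottgu.com is sent on the wire as "GET /", so the slash is added before the rule ever sees it; use a non-root path to exercise the trailing-slash rule.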
Customizing your URL-rewriting rules further is easy to do, either by editing the web.config file directly or by double-clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web site or application:

Clicking any of the rules above will open the rule editor back up and allow you to tweak, customize, and save them further.

Summary

Measuring and improving SEO is something every developer building a public-facing web site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today.

New URL-routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs they publish.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs published from sites you have already built today – without requiring you to change a lot of code.

The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO – as well.  I’ll be covering these additional capabilities more in future blog posts.

Hope this helps,

Scott

    Read the article

  • Designing an email system to guarantee delivery

    - by GlenH7
We are looking to expand our use of email for notification purposes. We understand it will generate more inbox volume, but we are being selective about which events fire notifications in order to keep the signal-to-noise ratio high. The big question we are struggling with is designing a system that guarantees the email was delivered. If an email isn't delivered, we will treat that as an exception event that needs to be investigated. In reality, I say "almost guarantees," because there are no true guarantees with email. We're just looking for a practical solution to making sure the email got there, and for the experiences others have had with the various approaches to guaranteeing delivery.

For the TL;DR crowd - how do we go about designing a system to guarantee delivery of emails? What techniques should we consider so we know the emails were delivered? Our biggest area of concern is what techniques to use so that, when a message is sent out, we know it either landed in an inbox or failed and we need to do something else.

Additional requirements:
- We're not at the stage of including an escalation response, but we'll want that in the future - or so we think.
- Most notifications will be internal to our enterprise, but some will be sent to external clients.
- Some of our application is in a hosted environment. We haven't determined whether those servers can relay through our corporate email servers or whether they'll act as their own mail servers.

Base design / modules (at the moment):
- A module to assign tracking identification
- A module to send out emails
- A module to receive delivery notifications (perhaps the same as the email module)
- A module that checks sent messages against delivery notifications and alerts on undelivered email

Some references:
- Atwood: Send some email
- Email Tracking

Some approaches:
- Request a response (aka read receipt or Message Disposition Notification). Seems prone to failure, since we have cross-compatibility issues due to differing mail servers and software.
- Return receipt (aka Delivery Status Notification). Not sure whether all mail servers honor this request.
- Require an action, and therefore prove receipt by reply. Seems burdensome to force the recipients to perform an additional task not related to resolving the issue. And no, we haven't come up with a way of linking getting the issue fixed to whether or not the email was received.
- Force a click-through / other-site sign-in. Similar to requiring some sort of action, this seems like an additional burden and will annoy the users. On the other hand, it seems the most likely to guarantee someone received the notification.
- Hidden image tracking. Not all email clients automatically load the image, and how would we associate the image(s) with the email tracking ID?
- Outsource delivery. This gets us out of the email business, but goes back to how to guarantee the outsourcer's receipt and subsequent delivery to the end recipient.

As a related concern, there will be an n:n relationship between issue notifications and recipients. The 1 issue : n recipients subset isn't as much of a concern, although if we had a delivery failure we would want to investigate and fix the core issue. Of bigger concern is n issues : 1 recipient; we're specifically concerned with making sure that all n issues were received by the recipient. How does forum software or issue-tracking software handle this requirement? If a tracking identifier is used, where is it placed in the email - in the Subject, or the Body?
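For what it's worth, the Delivery Status Notification approach at least has first-class API support in .NET. A minimal C# sketch, assuming System.Net.Mail and a reachable SMTP relay (the host name, addresses, and tracking ID below are placeholders):

using System;
using System.Net.Mail;

class NotificationSender
{
    static void Main()
    {
        var message = new MailMessage(
            "notifier@example.com", "recipient@example.com",
            "[TRK-12345] Event notification", "Notification body goes here.");

        // Ask the receiving server to report failures and delays back to us.
        // Whether it honors the request is up to that server - exactly the
        // caveat raised above.
        message.DeliveryNotificationOptions =
            DeliveryNotificationOptions.OnFailure | DeliveryNotificationOptions.Delay;

        // Carry the tracking identifier in a custom header so bounces and
        // DSNs can be matched back to the originating event.
        message.Headers.Add("X-Notification-Tracking-Id", "TRK-12345");

        using (var client = new SmtpClient("smtp.example.com"))
        {
            client.Send(message);
        }
    }
}

The DSNs themselves come back as messages to the sender's mailbox (per RFC 3464), so the "receive delivery notification" module would poll that mailbox and reconcile what it finds against the outstanding tracking IDs.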

    Read the article

  • List of blogs - year 2010

    - by hajan
This is the last day of year 2010, and I would like to add links to all the blog posts I published this year. First, I would like to mention that I started blogging in the ASP.NET Community in May/June 2010 and have really enjoyed writing about my favorite technologies, such as ASP.NET, jQuery/JavaScript, C#, LINQ, Web Services, etc. I also received great feedback, both through comments on my blog and on Twitter, Facebook, and LinkedIn, where I met many new experts as a result of my blog posts. Thanks to the interesting topics on my blog, I became a DZone MVB. Here is the list of posts I made in 2010 in my ASP.NET Community weblog (newest to oldest):

- Great library of ASP.NET videos – Pluralsight!
- NDepend – Code Query Language (CQL)
- NDepend tool – Why every developer working with Visual Studio.NET must try it!
- jQuery Templates in ASP.NET - Blogs Series
- jQuery Templates - XHTML Validation
- jQuery Templates with ASP.NET MVC
- jQuery Templates - {Supported Tags}
- jQuery Templates – tmpl(), template() and tmplItem()
- Introduction to jQuery Templates
- ViewBag dynamic in ASP.NET MVC 3 - RC 2
- Today I had a presentation on "Deep Dive into jQuery Templates in ASP.NET"
- jQuery Data Linking in ASP.NET
- How do you prefer getting bundles of technologies??
- Case-insensitive XPath query search on XML Document in ASP.NET
- jQuery UI Accordion in ASP.NET MVC - feed with data from database (Part 3)
- jQuery UI Accordion in ASP.NET WebForms - feed with data from database (Part 2)
- jQuery UI Accordion in ASP.NET – Client side implementation (Part 1)
- Using Images embedded in Project’s Assembly
- Macedonian Code Camp 2010 event has finished successfully
- Tips and Tricks: Deferred execution using LINQ
- Using System.Diagnostics.Stopwatch class to measure the elapsed time
- Speaking at Macedonian Code Camp 2010
- URL Routing in ASP.NET 4.0 Web Forms
- Conflicts between ASP.NET AJAX UpdatePanels & jQuery functions
- Integration of jQuery DatePicker in ASP.NET Website – Localization (part 3)
- Why not to use HttpResponse.Close and HttpResponse.End
- Calculate Business Days using LINQ
- Get Distinct values of an Array using LINQ
- Using CodeRun browser-based IDE to create ASP.NET Web Applications
- Using params keyword – Methods with variable number of parameters
- Working with Code Snippets in VS.NET
- Working with System.IO.Path static class
- Calculating GridView total using JavaScript/JQuery
- The new SortedSet<T> Collection in .NET 4.0
- JavaScriptSerializer – Dictionary to JSON Serialization and Deserialization
- Integration of jQuery DatePicker in ASP.NET Website – JS Validation Script (part 2)
- Integration of jQuery DatePicker in ASP.NET Website (part 1)
- Transferring large data when using Web Services
- Forums dedicated to WebMatrix
- Microsoft WebMatrix – Short overview & installation
- Working with embedded resources in Project's assembly
- Debugging ASP.NET Web Services
- Save and Display YouTube Videos on ASP.NET Website
- Hello ASP.NET World...

In addition, I have a big list of blog posts in the CodeASP.NET Community (60 in total) and the local MKDOT.NET Community (61 in total). You may find most of my weblogs.asp.net/hajan posts there too, but there you can also find many others. On my blog in the MKDOT.NET Community you can find most of my ASP.NET weblog posts translated into Macedonian, some posted in English, and some that were posted only there.

By reading my blogs, I hope you have learned something new, or at least have confirmed your knowledge.
And if you haven't already, I encourage you to start blogging and share your Microsoft tech thoughts with all of us... Sharing and spreading knowledge is definitely one of the noblest things we can do in our lives. "Give a man a fish and he will eat for a day. Teach a man to fish and he will eat for a lifetime." HAPPY NEW 2011 YEAR!!! Best Regards, Hajan

    Read the article

  • Subterranean IL: Pseudo custom attributes

    - by Simon Cooper
Custom attributes were designed to make the .NET framework extensible: if a .NET language needs to store additional metadata on an item that isn't expressible in IL, an attribute can be applied to the IL item to represent this metadata. For instance, the C# compiler uses DecimalConstantAttribute and DateTimeConstantAttribute to represent compile-time decimal or datetime constants, which aren't allowed in pure IL, and FixedBufferAttribute to represent fixed struct fields.

How attributes are compiled

Within a .NET assembly is a series of tables containing all the metadata for items within the assembly; for instance, the TypeDef table stores metadata on all the types in the assembly, and MethodDef does the same for all the methods and constructors. Custom attribute information is stored in the CustomAttribute table, which has references to the IL item the attribute is applied to, the constructor used (which implies the type of attribute applied), and a binary blob representing the arguments and name/value pairs used in the attribute application. For example, the following C# class:

[Obsolete("Please use MyClass2", true)]
public class MyClass
{
    // ...
}

corresponds to the following IL class definition:

.class public MyClass
{
    .custom instance void [mscorlib]System.ObsoleteAttribute::.ctor(string, bool) =
        { string('Please use MyClass2') bool(true) }
    // ...
}

and results in the following entry in the CustomAttribute table:

TypeDef(MyClass)
MemberRef(ObsoleteAttribute::.ctor(string, bool))
blob -> { string('Please use MyClass2') bool(true) }

However, there are some attributes that don't compile in this way.

Pseudo custom attributes

Just as there are some concepts in a language that can't be represented in IL, there are some concepts in IL that can't be represented in a language. This is where pseudo custom attributes come into play. The most obvious of these is SerializableAttribute. Although it looks like an attribute, it doesn't compile to a CustomAttribute table entry; it instead sets the serializable bit directly within the TypeDef entry for the type. This flag is fully expressible within IL; this C#:

[Serializable]
public class MySerializableClass {}

compiles to this IL:

.class public serializable MySerializableClass {}

For those interested, a full list of pseudo custom attributes is available here. For the rest of this post, I'll be concentrating on the ones that deal with P/Invoke.

P/Invoke attributes

P/Invoke is built right into the CLR at quite a deep level; there are 2 metadata tables within an assembly dedicated solely to P/Invoke interop, and many more that affect it. Furthermore, all the attributes used to specify P/Invoke methods in C# or VB have their own keywords and syntax within IL. For example, the following C# method declaration:

[DllImport("mscorsn.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.U1)]
private static extern bool StrongNameSignatureVerificationEx(
    [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
    [MarshalAs(UnmanagedType.U1)] bool fForceVerification,
    [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified);

compiles to the following IL definition:

.method private static pinvokeimpl("mscorsn.dll" lasterr winapi)
    bool marshal(unsigned int8) StrongNameSignatureVerificationEx(
        string marshal(lpwstr) wszFilePath,
        bool marshal(unsigned int8) fForceVerification,
        bool& marshal(unsigned int8) pfWasVerified) cil managed preservesig {}

As you can see, all the P/Invoke and marshal properties are specified directly in IL, rather than using attributes.
And, rather than creating entries in CustomAttribute, a whole bunch of metadata is emitted to represent this information. This single method declaration results in the following metadata being output to the assembly:

- A MethodDef entry containing basic information on the method
- Four ParamDef entries for the 3 method parameters and the return type
- An entry in ModuleRef for mscorsn.dll
- An entry in ImplMap linking the ModuleRef and MethodDef, along with the name of the function to import and the P/Invoke options (lasterr winapi)
- Four FieldMarshal entries containing the marshalling information for each parameter and the return type

Phew!

Applying attributes

Most of the time, when you apply an attribute to an element, an entry in the CustomAttribute table will be created to represent that application. However, some attributes represent concepts in IL that aren't expressible in the language you're coding in, and can instead result in a single bit change (SerializableAttribute and NonSerializedAttribute) or many extra metadata table entries (the P/Invoke attributes) being emitted to the output assembly.
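To see the difference from managed code, here is a small C# sketch (the class names are just for illustration): reflection surfaces the serializable bit through the TypeDef flags, while a real custom attribute like ObsoleteAttribute is materialized from the CustomAttribute table:

using System;
using System.Reflection;

[Serializable]
public class MySerializableClass {}

[Obsolete("Please use MyClass2")]   // no error flag, so referencing it only warns
public class MyClass {}

class Program
{
    static void Main()
    {
        // Pseudo custom attribute: stored as a bit in the TypeDef entry,
        // exposed via the type's flags rather than an attribute instance.
        Type pseudo = typeof(MySerializableClass);
        Console.WriteLine(pseudo.IsSerializable);                        // True
        Console.WriteLine(pseudo.Attributes & TypeAttributes.Serializable);

        // Real custom attribute: stored in the CustomAttribute table and
        // reconstructed by reflection on request.
        var obsolete = (ObsoleteAttribute)Attribute.GetCustomAttribute(
            typeof(MyClass), typeof(ObsoleteAttribute));
        Console.WriteLine(obsolete.Message);                             // Please use MyClass2
    }
}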

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth.

Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers.

Simplicity

To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff.

Productivity

The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates.

Lower TCO

Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate- and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges.
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job-board distributors. (The first PeopleSoft 9.1 Feature Pack, released in March 2011, delivered an integration framework to connect to resume-parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures.

For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v:

Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper
Target: i686-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu
Thread model: posix
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

Here's a sample of my code:

#include "Word.h"
#include <string>

using namespace std;

pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER;

Word::Word(): _occurrences(1)
{
    memset(_buf, 0, 25);
}

Word::Word(char *str): _occurrences(1)
{
    memset(_buf, 0, 25);
    if (str != NULL)
    {
        strncpy(_buf, str, strlen(str));
    }
}

I tried g++ -c -ansi, g++ -c -std=c++98, and g++ -c -std=c++03; none of these options builds the code correctly. I get the following compilation errors:

mriganka@ubuntu:~/WordCount$ make
g++ -c -g -ansi Word.cpp -o Word.o
Word.cpp: In constructor ‘Word::Word()’:
Word.cpp:10:21: error: ‘memset’ was not declared in this scope
Word.cpp: In constructor ‘Word::Word(char*)’:
Word.cpp:16:21: error: ‘memset’ was not declared in this scope
Word.cpp:19:34: error: ‘strlen’ was not declared in this scope
Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope
Word.cpp: In member function ‘void Word::operator=(const Word&)’:
Word.cpp:37:42: error: ‘strlen’ was not declared in this scope
Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope
Word.cpp: In copy constructor ‘Word::Word(const Word&)’:
Word.cpp:44:21: error: ‘memset’ was not declared in this scope
Word.cpp:45:52: error: ‘strlen’ was not declared in this scope
Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope

So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation.

Second problem: in order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message-queue and pthread library functions.
Here's the error that I am getting:

mriganka@ubuntu:~/WordCount$ make
g++ -c -g -ansi Word.cpp -o Word.o
g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count
main.o: In function `main':
/home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create'
/home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open'
/home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr'
/home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send'
/home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join'
/home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close'
/home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink'
main.o: In function `count_words(void*)':
/home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open'
/home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr'
/home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive'
collect2: ld returned 1 exit status

Here's my makefile:

CC=g++
CFLAGS=-c -g -ansi
LDFLAGS=-lrt
INC=-I/usr/lib/i386-linux-gnu
SOURCES=Word.cpp HashMap.cpp main.cpp
OBJECTS=$(SOURCES:.cpp=.o)
EXECUTABLE=word_count

all: $(SOURCES) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
	$(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@

.cpp.o:
	$(CC) $(CFLAGS) $< -o $@

clean:
	rm -f *.o word_count

Please help me resolve both issues. I have searched online relentlessly for a solution to these problems, but no one seems to have encountered these exact issues.
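For what it's worth, both symptoms have likely explanations on this toolchain; here is a sketch of the usual fixes (assuming the undeclared names really are the C string routines, and that Ubuntu's default ld with left-to-right library resolution is in use). GCC 4.6 removed many of the transitive header includes that older GCC/RH toolchains happened to provide, so the C string functions must be declared explicitly:

// Word.cpp
#include <cstring>   // memset, strlen, strncpy - no longer pulled in transitively
#include <string>

And for the link errors: on Ubuntu's linker, libraries named before the object files that reference them are effectively discarded, so -lrt (and pthread support) should come after the objects:

$(EXECUTABLE): $(OBJECTS)
	$(CC) $(OBJECTS) -o $@ -pthread -lrt

The same reordering applies to any additional libraries added later.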

    Read the article

  • Don&rsquo;t Kill the Password

    - by Anthony Trudeau
A week ago Mr. Honan from Wired.com penned an article on security he titled “Kill the Password: Why a String of Characters Can’t Protect Us Anymore.” He asserts that the password is not effective and a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim’s vendetta. His conclusion is ill conceived, even though there are smatterings of truth and good advice in it.

The password is a security barrier, much like a lock on your door. In and of itself it doesn't guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like “password,” which is like the sliding latch on a bathroom stall. But just as in the real world, a lock isn’t always enough. You can have a lock, security system, video cameras, guard dogs, and even armed security guards, and none of that guarantees your protection. Even top-secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that’s the crux of it. There are real hackers out there who are that good. Killer coding ninja monkeys do exist!

We still have locks on our doors because they still serve their role. Passwords are no different. Security doesn’t end with the password. Most people would agree that stuffing your mattress with your life savings isn’t a good idea even if you have the best locks and security system. Most people agree it's safest to have the money in a bank. Essentially this is compartmentalization.

Compartmentalization extends to the online world as well. You’re at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you’re lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done. An attacker should not be able to get into your bank account because they breached your Twitter account.

It’s time to prioritize once you’ve compartmentalized. This simply means deciding how much security you want for the different compartments, which I’ll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It’s similar to an engineering adage, but in this case it’s security, convenience, and privacy – pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer, or the privacy of not having the bank records, is more important than the added security.

The following are lists of security do’s and don’ts (these aren’t meant to be exhaustive, and each could be an article in itself):

Security Do’s:
- Use strong passwords based on a phrase
- Use encryption whenever you can (e.g. HTTPS in Facebook)
- Use a firewall (and learn to use it properly)
- Configure security on your router (including port blocking)
- Keep your operating system patched
- Make routine backups of important files
- Realize that if you’re not paying for it, you’re the product

Security Don’ts:
- Link accounts if at all possible
- Reuse passwords across your security zones
- Use real answers for security questions (e.g. mother’s maiden name)
- Trust anything you download
- Ignore message boxes shown by your system or browser
- Forget to test your backups
- Share your primary email indiscriminately

Only you can decide your comfort level between convenience, privacy, and security.
Attackers are going to find exploits in software. Software is complex and depends on other software. The exploits are the responsibility of the software company, but your security is always your responsibility. Complete security is an illusion, but there is plenty you can do to minimize the risk online, just as you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary for just as long as locks are.

    Read the article

  • GCC 4.2.1 Compiling on Cygwin(Win7 64bit) for iPhone [closed]

    - by Kenneth Noland
Hey. This is going to take a long while to explain, but the short version is that I am currently attempting to compile the LLVM GCC frontend for ARMv7 to compile apps for the Cortex-A8 (iPhone 3GS). I'm running into an error from ld when compiling libgcc (part of the gcc compilation process) that has been driving me mad! The command is this:

/usr/llvm-gcc-4.2-2.8.source/build/./gcc/xgcc \
-B/usr/llvm-gcc-4.2_2.8.source/build/./gcc \
-B/usr/local/arm-apple-darwin/bin \
-B/usr/local/arm-apple-darwin/lib \
-isystem /usr/local/arm-apple-darwin/include \
-isystem /usr/local/arm-apple-darwin/sys-include \
-O2 -g -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-inline -dynamiclib -nodefaultlibs -Wl,-dead_strip \
-marm \
-install_name /usr/local/arm-apple-darwin/lib/libgcc_s.1.dylib \
-single_module -o ./libgcc_s.1.dylib.tmp \
-Wl,-exported_symbols_list,libgcc/./libgcc.map -compatibility_version 1 -current_version 1.0 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc \
... long list of .o files ... \
-lc

And the result is typically a lot of undefined references to malloc, free, exit, etc., which typically indicate that libc is not getting linked in. After going through the list of errors that ld is throwing, I see at the top that it is attempting to pull in /usr/lib/libc.a and complains that it is not the correct platform. Okay, that makes sense, so I spent 5 minutes on Google and found an answer: if I copy libSystem.dylib and rename it to libc.dylib, that should solve the problem. But it doesn't. I couldn't find a copy of that file on my phone, so I pulled it directly from the SDK. I then get this strange error:

ld64: in /usr/local/arm-apple-darwin/lib/libc.dylib, can't re-map file, errno=22

At this point, I did everything I could think of. I grabbed a fresh copy of my /usr/lib folder from my iPhone and confirmed that libSystem.dylib (and libSystem.B.dylib) wasn't there. I unpacked the raw .ipsw package for iOS 4.2.1 and once again could not find a copy of libSystem.dylib there either. I unpacked the iPhone SDK and the Mac OS SDK and managed to find a copy of it in both, but that error just kept persisting. I copied libSystem.dylib and libSystem.B.dylib, tried all sorts of combinations of renaming to libc.dylib, and still got nothing but errors. I can't find a way to get it to recognize the file and link against it. I also tried linking against the libc.a located in the iPhone SDK, and that didn't work either. I checked what ./xgcc was firing off, and it was my freshly built copy of arm-apple-darwin-ld64, which should be fine.

A little bit of background here. I built LLVM+Clang 2.8 with no errors, and I rebuilt the ODCCTools with some light modifications to get them to compile on Cygwin (I'll post my changes in a patch along with a tutorial if I can get this to work). I also grabbed the iphone-dev "includes" and "csu" projects, and those completed successfully, although there really is no point to them since I can't get it to link against crt0.a. I'm running out of ideas here. Can anyone help me out on this?

    Read the article

  • Preview Before You Paste with Live Preview in Office 2010

    - by DigitalGeekery
Do you often find yourself frustrated that content you just copied and pasted didn’t turn out the way you expected? With the new Live Preview in Office 2010, you can preview how copied content will look when it’s pasted, even between Office applications. Not every paste-preview option will be available in every circumstance; the available options will be based on the applications being used and what content is copied.

Copy your content like normal by right-clicking and selecting Copy, pressing Ctrl + C, or selecting Copy from the Home tab. Next, select your location to paste the content. Now you can access the Paste Preview buttons either by selecting the Paste dropdown list from the Home tab, or by right-clicking.

As you hover your cursor over each of the Paste Options buttons, you will see a preview of what the content will look like if you paste using that option. Click the corresponding button when you find the paste option you like. "Paste" will paste all the content and formatting, as you can see below. "Values" will paste values only, with no formatting. "Formatting" will paste only the formatting, with no values. Hover over Paste Special to reveal any additional paste options.

The process is similar in other Office applications. As you can see in the Word document below, Keep Text Only will paste the text, but not the orange color format from the original text.

Even after you’ve pasted, there is still time to change your mind. After you paste content, you’ll see a Paste Options button near your content. If you don’t, you can pull it up by pressing the Ctrl key. Note: this is also available after using Ctrl + V to paste. Click to enable the dropdown and select one of the available options.

Using Live Paste Preview between multiple applications is just as easy. If we preview pasting the content from our Word document into PowerPoint using the Keep Source Formatting option, we’ll see that the outcome looks awful. Selecting Use Destination Theme will merge the text into the theme of the PowerPoint document, which looks a lot better on our slide.

Live Paste Preview is a nice addition to Office 2010 and is sure to save time spent undoing the unexpected consequences of pasting content. Looking for more Office 2010 tips? Check out some of our other Office 2010 posts, like how to create a customized tab on the Office 2010 ribbon and how to use the streamlined printing features in Office 2010.

    Read the article

  • Building a project in VS that depends on a static and dynamic library

    - by fg nu
Noob noobin'. I would appreciate some very careful handholding in setting up an example in Visual Studio 2010 Professional where I am trying to build a project which links:

- a previously built static library, for which the VS project folder is "C:\libjohnpaul\"
- a previously built dynamic library, for which the VS project folder is "C:\libgeorgeringo\"

These are listed as Recipes 1.11, 1.12 and 1.13 in the C++ Cookbook. The project fails to compile for me with unresolved dependencies (see details below), and I can't figure out why.

Project 1: Static Library

The following are the header and source files that were compiled in this project. I was able to compile this project fine in VS2010, to the static library "libjohnpaul.lib", which lives in the folder "C:/libjohnpaul/Release/".

// libjohnpaul/john.hpp
#ifndef JOHN_HPP_INCLUDED
#define JOHN_HPP_INCLUDED

void john( ); // Prints "John, "

#endif // JOHN_HPP_INCLUDED

// libjohnpaul/john.cpp
#include <iostream>
#include "john.hpp"

void john( )
{
    std::cout << "John, ";
}

// libjohnpaul/paul.hpp
#ifndef PAUL_HPP_INCLUDED
#define PAUL_HPP_INCLUDED

void paul( ); // Prints " Paul, "

#endif // PAUL_HPP_INCLUDED

// libjohnpaul/paul.cpp
#include <iostream>
#include "paul.hpp"

void paul( )
{
    std::cout << "Paul, ";
}

// libjohnpaul/johnpaul.hpp
#ifndef JOHNPAUL_HPP_INCLUDED
#define JOHNPAUL_HPP_INCLUDED

void johnpaul( ); // Prints "John, Paul, "

#endif // JOHNPAUL_HPP_INCLUDED

// libjohnpaul/johnpaul.cpp
#include "john.hpp"
#include "paul.hpp"
#include "johnpaul.hpp"

void johnpaul( )
{
    john( );
    paul( );
}

Project 2: Dynamic Library

Here are the header and source files for the second project, which also compiled fine with VS2010; the "libgeorgeringo.dll" file lives in the directory "C:\libgeorgeringo\Debug".

// libgeorgeringo/george.hpp
#ifndef GEORGE_HPP_INCLUDED
#define GEORGE_HPP_INCLUDED

void george( ); // Prints "George, "

#endif // GEORGE_HPP_INCLUDED

// libgeorgeringo/george.cpp
#include <iostream>
#include "george.hpp"

void george( )
{
    std::cout << "George, ";
}

// libgeorgeringo/ringo.hpp
#ifndef RINGO_HPP_INCLUDED
#define RINGO_HPP_INCLUDED

void ringo( ); // Prints "and Ringo\n"

#endif // RINGO_HPP_INCLUDED

// libgeorgeringo/ringo.cpp
#include <iostream>
#include "ringo.hpp"

void ringo( )
{
    std::cout << "and Ringo\n";
}

// libgeorgeringo/georgeringo.hpp
#ifndef GEORGERINGO_HPP_INCLUDED
#define GEORGERINGO_HPP_INCLUDED

// define GEORGERINGO_DLL when building libgeorgeringo.dll
#if defined(_WIN32) && !defined(__GNUC__)
#  ifdef GEORGERINGO_DLL
#    define GEORGERINGO_DECL __declspec(dllexport)
#  else
#    define GEORGERINGO_DECL __declspec(dllimport)
#  endif
#endif // WIN32

#ifndef GEORGERINGO_DECL
#  define GEORGERINGO_DECL
#endif

// Prints "George, and Ringo\n"
#ifdef __MWERKS__
#  pragma export on
#endif

GEORGERINGO_DECL void georgeringo( );

#ifdef __MWERKS__
#  pragma export off
#endif

#endif // GEORGERINGO_HPP_INCLUDED

// libgeorgeringo/georgeringo.cpp
#include "george.hpp"
#include "ringo.hpp"
#include "georgeringo.hpp"

void georgeringo( )
{
    george( );
    ringo( );
}

Project 3: Executable that depends on the previous libraries

Lastly, I try to link the aforecompiled static and dynamic libraries into one project called "helloBeatlesII", which has the project directory "C:\helloBeatlesII" (note that this directory does not nest the other project directories).
The linking process that I followed is described below. To the "helloBeatlesII" solution, I added the "libjohnpaul" and "libgeorgeringo" projects; then I changed the properties of the "helloBeatlesII" project to additionally point to the include directories of the two projects on which it depends ("C:\libgeorgeringo\libgeorgeringo" & "C:\libjohnpaul\libjohnpaul"); added "libgeorgeringo" and "libjohnpaul" to the project dependencies of the "helloBeatlesII" project; and made sure that the "helloBeatlesII" project was built last.

Trying to compile this project gives me the following unsuccessful build:

1>------ Build started: Project: helloBeatlesII, Configuration: Debug Win32 ------
1>Build started 10/13/2012 5:48:32 PM.
1>InitializeBuildStatus:
1>  Touching "Debug\helloBeatlesII.unsuccessfulbuild".
1>ClCompile:
1>  helloBeatles.cpp
1>ManifestResourceCompile:
1>  All outputs are up-to-date.
1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl georgeringo(void)" (?georgeringo@@YAXXZ) referenced in function _main
1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl johnpaul(void)" (?johnpaul@@YAXXZ) referenced in function _main
1>E:\programming\cpp\vs-projects\cpp-cookbook\helloBeatlesII\Debug\helloBeatlesII.exe : fatal error LNK1120: 2 unresolved externals
1>
1>Build FAILED.
1>
1>Time Elapsed 00:00:01.34
========== Build: 0 succeeded, 1 failed, 2 up-to-date, 0 skipped ==========

At this point I decided to call in the cavalry. I am new to VS2010, so in all likelihood I am missing something straightforward.
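A note for anyone hitting the same wall - this is a guess from the symptoms rather than a verified fix: in VS2010, listing projects as project dependencies only controls build order; by itself it does not hand the .lib files to the linker, which is exactly what the LNK2019 errors on johnpaul() and georgeringo() suggest. Two common ways to wire it up:

1. Use references: right-click the helloBeatlesII project, then Properties -> Common Properties -> Framework and References -> Add New Reference, and add both library projects. With "Link Library Dependencies" set to True, the linker pulls in libjohnpaul.lib and the import library for libgeorgeringo.dll automatically.

2. Or point the linker at the libraries by hand in the helloBeatlesII project properties:

   Linker -> General -> Additional Library Directories:
       C:\libjohnpaul\Release;C:\libgeorgeringo\Debug
   Linker -> Input -> Additional Dependencies:
       libjohnpaul.lib;libgeorgeringo.lib

For the DLL you link against its import library (the .lib that VS generates next to libgeorgeringo.dll), and the DLL itself must be findable at runtime, e.g. copied next to helloBeatlesII.exe.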

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >