Search Results

Search found 2871 results on 115 pages for 'ios6 maps'.


  • How to grow to be global sysadmin of an organization?

    - by user64729
    Bit of a non-technical question, but I have seen career-development questions on here before, so hopefully it is fine. I work for a fast-growing but still small organization (~65 employees). I have been their external sysadmin for a while now, looking after hosted Linux servers and infrastructure. In the past 12 months I have been transforming into the internal sysadmin for our office too. I'm currently studying Cisco CCNA to cover the demands of being an internal sysadmin and looking after the office LAN, routers, switches and VPNs. Now they want me to look after the global sysadmin function of the organization as a whole. The organization has 3 offices in total, 2 in the UK and 1 in the US. I work in one of the UK offices. The other offices are primarily Windows-desktop, AD-domain shops. My office is primarily a Linux shop with a file server and NFS/NIS (no AD domain for the Windows desktops yet, but it's in the works). Each of the other offices has a sysadmin whom, in theory, I am supposed to supervise, but in reality each is independent. I have a very competent junior sysadmin working with me who shares the day-to-day tasks and does some of the longer-term projects with my supervision. My boss has asked me how to grow from being the external sysadmin to the global sysadmin. I am to ponder this and then report back to him on how to achieve it. My current thoughts are:
    - Management training or professional development - e.g. reading books such as "Influencer" and "7 Habits". I also feel I should take steps to improve my communication skills, since a senior person is expected to talk and speak out more often.
    - Learn more about Windows and Active Directory - I'm an LPI-certified guy and have a lot of experience in Linux (Ubuntu on the desktop, Debian/Ubuntu as server). Since the other offices are mainly Windows domains, it makes sense to skill up in that area so I can understand what the other admins are talking about.
    - Talk to previous colleagues who are in this role already - to try and get the benefit of their experience.
    - Produce an "IT Roadmap" or similar that maps out where we want the organization to be and when, plotted out over the next couple of years with regards to internal and external infrastructure. I have produced a "Security roadmap" already which does cover some of these things.
    I guess this can be summed up as "thinking more strategically"? I'd appreciate comments from anyone who has been through a similar situation, thanks.

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far, it's clear that running if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple server blocks being used. But isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks to accomplish this would be somewhat slower than a single one under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            #other code
        }

    What I want to do is redirect certain http requests to https. For example, I want /login/ and /my-account/ to always be forced to use SSL. If you're on /help/ though, I want that served over plain http. Is there a way to accomplish this within a single server block? Or is there no downside to using two server blocks to get this working? nginx seems to be under pretty active development, and a lot of the older guides I've followed were from times when you couldn't listen for port 80 and 443 within the same server block. Now that nginx supports that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated. EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }
        server {
            listen 10.0.0.60:80; ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;
            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }

    It's not working, though. When I change the default to https, everything is redirected to SSL, so it does somewhat work. But /#/user/login is not being redirected to HTTPS. Any ideas? Also, is this a good way to go about this?
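
    One hedged way to get this into a single server block (a sketch only; the hostname and paths are illustrative): note that everything after a # in a URL is a fragment the browser never sends to the server, so a map key like ~^/#/user/login can never match. Matching the real request paths and keying the map on both scheme and path means only plain-http requests to the protected paths get redirected:

        # goes in the http {} context, outside the server block
        map "$scheme:$uri" $need_https_redirect {
            default                 0;
            "~^http:/login"         1;
            "~^http:/my-account"    1;
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            # a safe use of "if": it only issues a return
            if ($need_https_redirect) {
                return 301 https://mysite.com$request_uri;
            }
            # ...rest of the existing configuration...
        }

    Requests for /help/ (or anything else) fall through to the default of 0 and are served as they arrived, so the http and https sides can keep sharing one server block.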

    Read the article

  • samba 3.5 "force user" doesn't seem to be sticking

    - by myCubeIsMyCell
    After installing a new OS with a newer version of Samba, I'm having trouble accessing my shares. I can browse to the specific share, but only to the top level. As best I can tell from the logs, it seems the "force user" in the Samba config isn't sticking beyond the initial connection. Details below. I installed a new version of CentOS on my storage server. My old CentOS (4?) install had Samba version 3.0.33; the new CentOS is using 3.5.10. No domain/AD involved ... just a home workgroup. No real security ... just some shares hidden and some defined as read-only. Here's my config:

        [global]
            workgroup = WORKGROUP
            server string = Samba Server Version %v
            netbios name = luna
            security = share
            # logs split per machine
            log file = /var/log/samba/log.%m
            log level = 2
            # max 50KB per log file, then rotate
            max log size = 50
            winbind use default domain = Yes

        [strge]
            comment = please
            path = /storage
            browseable = yes
            read only = no
            force user = windowsguest
            force group = users
            guest ok = yes

    So... the problem I'm running into is that 'force user' only seems to hold for the initial connection, and I see all the top-level folders fine. When I drill into a folder I get access denied - which appears to be due to my Windows user info being sent (it tries to authenticate xuser - a non-existent user as far as Samba is concerned - so it maps to nobody and fails). Here's the smb error msg:

        [2012/11/29 14:30:27.326195, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [xuser] -> [xuser] FAILED with error NT_STATUS_NO_SUCH_USER
        [2012/11/29 14:30:27.326251, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [nobody] -> [nobody] FAILED with error NT_STATUS_NO_SUCH_USER

    Most of the top-level directories are 755, some 777. Either way, I cannot access them. If I do a chown -R windowsguest.users ... no change... but if I do a chmod -R to 777 or 755 they become browsable... but I still can't create files (even in the 777 ones). Not sure what role it plays, if any... but I had to recreate the user windowsguest under the new OS install; the uid & gid match the old user. The main issue as far as I can tell is that Samba isn't maintaining the 'force user' - but I could be wildly off base. The client OS is Win7 Pro x64. Thanks for any suggestions or advice!
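
    A hedged smb.conf sketch of one common fix (it assumes the local account windowsguest exists and owns /storage, and it swaps the deprecated share-level security for user-level security with a guest fallback): the log shows even the nobody fallback failing, so explicitly map failed logins to a known guest account and make the share guest-only so every session, not just the first browse, runs as windowsguest:

        [global]
            security = user
            map to guest = Bad User          # unknown Windows users fall back to the guest account
            guest account = windowsguest     # use this local account instead of nobody

        [strge]
            path = /storage
            read only = no
            guest ok = yes
            guest only = yes                 # every connection acts as the guest account
            force user = windowsguest
            force group = users

    After editing, testparm checks the syntax and service smb restart applies it; this is a sketch of one approach, not the only way to keep 'force user' in effect.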

    Read the article

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing a couple of timeouts once in a while from the webserver trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options for sendmail in the sendmail tuning guide. What I am focusing on now is the Delivery Mode:

        Delivery Mode
        There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode (d) configuration option. These modes specify how quickly mail will be delivered. Legal modes are:
            i   deliver interactively (synchronously)
            b   deliver in background (asynchronously)
            q   queue only (don't deliver)
            d   defer delivery attempts (don't deliver)
        There are tradeoffs. Mode i gives the sender the quickest feedback, but may slow down some mailers and is hardly ever necessary. Mode b delivers promptly but can cause large numbers of processes if you have a mailer that takes a long time to deliver a message. Mode q minimizes the load on your machine, but means that delivery may be delayed for up to the queue interval. Mode d is identical to mode q except that it also prevents lookups in maps including the -D flag from working during the initial queue phase; it is intended for "dial on demand" sites where DNS lookups might cost real money. Some simple error messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode. Mode b is the usual default. If you run in mode q (queue only), d (defer), or b (deliver in background) sendmail will not expand aliases and follow .forward files upon initial receipt of the mail. This speeds up the response to RCPT commands. Mode i should not be used by the SMTP server.

    I currently have the CentOS default modes: sendmail.cf has DeliveryMode=background and submit.cf has DeliveryMode=i. As I understand it, sendmail.cf/.mc is for outgoing email from the relay (to the intertubes) and submit.cf/.mc is for incoming email (from my webservers). Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
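
    A hedged sketch of the mc change being asked about (paths and the rebuild command are the usual CentOS ones and may differ on other layouts): setting the relay's delivery mode to q makes sendmail accept and queue messages immediately and leave the actual relaying to the periodic queue runner, which is what keeps the SMTP accept path fast for the webservers:

        dnl # /etc/mail/sendmail.mc on the relay (not submit.mc)
        define(`confDELIVERY_MODE', `q')dnl  queue only: accept fast, deliver from the queue
        dnl # rebuild and restart, e.g.:
        dnl #   make -C /etc/mail && service sendmail restart
        dnl # outbound flow is then governed by the queue runner interval
        dnl # (the QUEUE setting in /etc/sysconfig/sendmail, i.e. sendmail -qXX)

    With this mode, mail accumulates in /var/spool/mqueue between runs, so it is worth watching mailq and keeping the queue interval short enough for your delivery expectations.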

    Read the article

  • debian gateway using iptables

    - by meijuh
    I am having problems setting up a Debian gateway server. My goal:
    - eth1 is the WAN interface.
    - eth0 is the LAN interface.
    - Allow both ports 22 (SSH) and 80 (HTTP) to be accessed from the outside world on the gateway (SSH and HTTP run on this server).
    What I did was the following. Create a file /etc/iptables.rules with contents:

        *nat
        -A POSTROUTING -o eth1 -j MASQUERADE
        COMMIT
        *filter
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth1 -j DROP
        COMMIT

    Edit /etc/network/interfaces as follows:

        # The loopback network interface
        auto lo
        iface lo inet loopback
            pre-up iptables-restore < /etc/iptables.rules

        auto eth0
        allow-hotplug eth0
        iface eth0 inet dhcp

        #auto eth1
        #allow-hotplug eth1
        #iface eth1 inet dhcp
        allow-hotplug eth1
        iface eth1 inet static
            address 217.119.224.51
            netmask 255.255.255.248
            gateway 217.119.224.49
            dns-nameservers 217.119.226.67 217.119.226.68

    Uncomment the line net.ipv4.ip_forward=1 in /etc/sysctl.conf to allow packet forwarding. The static settings for eth1, such as the IP address, I got from my router (which I want to replace); I simply copied these. I have a (Windows) DNS + DHCP server on IP address 10.180.1.10, which assigns IP address 10.180.1.44 to eth0. What this server does is not really interesting; it only maps domain names on our local network and assigns one static IP to the gateway. What works: on the gateway itself I can ping 8.8.8.8 and google.nl. So that is okay. What does not work: (1) Every machine connected to eth0 (indirectly, via a switch) cannot ping an IP or a domain. So I guess the gateway cannot be found. (2) Also, when I configure my Linux machine (a laptop) to use a static IP 10.180.1.41, a mask and a gateway (10.180.1.44), I cannot ping an IP or a domain either. This means that maybe my iptables rules are incorrect or not loaded correctly. Or maybe I have to configure the DNS/DHCP on my Windows machine. I have not reset the Windows machine's network settings or restarted the DNS/DHCP services; should I do this? I did not install dnsmasq as described here: http://blog.noviantech.com/2010/12/22/debian-router-gateway-in-15-minutes/. I don't think this is necessary?
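
    A hedged checklist sketch for the usual suspects in this kind of setup (standard iptables/sysctl commands, interface names as above): editing /etc/sysctl.conf alone does not enable forwarding until the next boot or a sysctl reload, and it is worth confirming that the NAT rule is actually loaded and that forwarded LAN-to-WAN traffic is allowed:

        # run on the gateway as root
        sysctl -w net.ipv4.ip_forward=1       # apply now; sysctl.conf only takes effect at boot or via 'sysctl -p'
        iptables -t nat -L POSTROUTING -nv    # the MASQUERADE rule on eth1 should be listed with a non-zero counter
        iptables -L FORWARD -nv               # default policy must be ACCEPT, or add explicit rules:
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    LAN clients also need a resolver they can actually reach (the Windows DNS at 10.180.1.10 or a public one) and 10.180.1.44 as their default gateway; a plain ping 8.8.8.8 from a client is a quick way to separate a routing/NAT problem from a DNS one.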

    Read the article

  • What presentation software suits my needs?

    - by claws
    Background: I'm teaching biology to 12th grade students. The syllabus I'm teaching is huge. I mean literally, very huge. There is a lot for students to remember. There are no less than 1000 facts (weird names, dates etc.) for students to remember. They'll have to remember all of them; they don't have a choice. The notes I compiled for their learning alone run to 80 printed pages (just the bullet outline & facts). That's just one chapter. We have 34 chapters. Also, my students are very hardworking; they study up to 8-10 hrs per day (Yeah! We are from India :). So, I want to ensure maximum retention by the students at each and every stage (teaching & learning). I'm trying to use as many memory-training techniques as possible. I'm trying to incorporate mnemonics, strong visual aids (pictures, 3D animations, real videos etc.), spaced repetition, and so on. I think MS PowerPoint is not suitable for my needs: there are about 200 slides per chapter, and it's very easy for students to get lost while I'm teaching. The problem with PowerPoint is that it gives facts (as bullets) but it doesn't exploit the association & organization (concept map) of the content, which helps students learn quickly. I found an amazing piece of software called XMind. You can see the screenshot here. The problem is that it is not as capable as PowerPoint in terms of presenting. This software can be used just for concept maps. In the above screenshot, each topic occupies a single slide. I have an image/picture (a detailed, huge picture), about 5-10 bullet points, and probably a video or an animation of something. And XMind is not good at presenting, in that it doesn't allow me to set what to present after what. I want to present a top-down view, with a slide for each topic. PS: I don't like prezi.com; it is simply too confusing for my students, zooming here and there. I haven't tried it myself, but I've seen a few presentations.

    Read the article

  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: We have a domain, mydomain.com. Everything is on our own server, except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g. [email protected] |/path/to/issuetracker.script. I'm struggling with a setup that allows the following, both locally and from users' email clients:
    - guser1 - has a Gmail account and a local account
    - guser2 - only has a Gmail account
    - bugs - has a pipe alias in /etc/aliases for the issue tracker
    Scenarios:
    - mail to [email protected] from the local host (crons and such) needs to go to the Gmail account
    - mail to [email protected] from the local host
    - mail to [email protected] needs to be piped to the local issue tracker script
    So, the first stab was creating a transport map. In this scenario, our server would be set as the MX and guser*-destined emails are sent to Gmail. Put the Gmail users in a map like so:

        [email protected] smtp:gmailsmtp:25
        [email protected] smtp:gmailsmtp:25

    Problems:
    - Ignores extensions such as [email protected]
    - Only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused))
    - Since append_at_myorigin is set to no, all received emails have (unknown sender)
    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain. This too requires setting the local server as the MX:

        root: root@localhost
        # transport
        mydomain.com smtp:gmailsmtp:25

    Problems:
    - If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like mail -s "testing" root < text.txt, Postfix ignores the /etc/aliases entry, maps it to [email protected] and attempts to send it to the Gmail transport mapping.
    Third stab: Create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to the local server and leave the MX for mydomain.com pointing at the Gmail server.
    Problems:
    - Does not solve the issue with local accounts. So when the bug tracker responds to an email from [email protected], it uses a local transport and the user never receives the email.
    % postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    append_at_myorigin = no
    append_dot_mydomain = no
    biff = no
    config_directory = /etc/postfix
    inet_interfaces = all
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    mydestination = $myhostname, localhost.$myhostname, localhost
    myhostname = mydomain.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    myorigin = /etc/mailname
    readme_directory = no
    recipient_delimiter = +
    relayhost =
    smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
    smtp_tls_enforce_peername = no
    smtp_tls_key_file = /etc/ssl/certs/kspace.pem
    smtp_tls_note_starttls_offer = yes
    smtp_tls_scert_verifydepth = 5
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
    smtpd_tls_ask_ccert = yes
    smtpd_tls_req_ccert = no
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
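
    One hedged approach (a sketch, not a drop-in config; the hostname and map entries are illustrative, and it assumes the box can reach Google's MX hosts on port 25): stop listing the bare domain as a local destination so mail for mydomain.com follows the public MX to Gmail, and pull only the pipe addresses back onto the box with a virtual alias map:

        # main.cf (fragment)
        myhostname = host.mydomain.com        # a hostname that is NOT the bare mail domain
        mydestination = $myhostname, localhost.$myhostname, localhost
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual   (then run: postmap /etc/postfix/virtual)
        bugs@mydomain.com    bugs@localhost

        # /etc/aliases           (then run: newaliases)
        bugs: "|/path/to/issuetracker.script"

    With this layout, cron mail to the guser addresses is routed by DNS to Gmail like any outside sender's mail, the local pipes keep working, and the transport map and append_at_myorigin workarounds can go away; plus-extensions on the piped addresses should still be honored as long as recipient_delimiter = + stays set.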

    Read the article

  • How to resolve `bootpd` crashing constantly on Mac OS X 10.6.4 Snow Leopard Server?

    - by morgant
    I've got a Mac Pro running Mac OS X 10.6.4 Snow Leopard Server and it's recently started getting numerous 'kNetworkError's in Server Admin.app when viewing services. It's acting as a gateway w/NAT and has been so for quite some time. There is one glaring issue, bootpd crashes all the time with the following errors in `/var/log/system.log/: Aug 12 16:54:59 servername bootpd[3572]: server starting Aug 12 16:54:59 servername bootpd[3572]: server name servername.domain.tld Aug 12 16:54:59 servername bootpd[3572]: interface en0: ip 10.0.1.9 mask 255.255.255.0 Aug 12 16:54:59 servername bootpd[3572]: bsdpd: re-reading configuration Aug 12 16:54:59 servername bootpd[3572]: bsdpd: shadow file size will be set to 48 megabytes Aug 12 16:54:59 servername bootpd[3572]: bsdpd: age time 00:15:00 Aug 12 16:54:59 servername bootpd[3572]: [3572] detected buffer overflow Aug 12 16:54:59 servername com.apple.launchd[1] (com.apple.bootpd[3572]): Job appears to have crashed: Abort trap Aug 12 16:54:59 servername com.apple.ReportCrash.Root[3571]: 2010-08-12 16:54:59.828 ReportCrash[3571:2807] Saved crash report for bootpd[3572] version ??? (???) to /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash It is correctly configured to serve DHCP through en1 (not en0), the "LAN" port. This happens even with no hardware (even switches) connected to the "LAN" port. There are no DHCP clients listed. Oddly, the "Overview" shows 1 static map, but nothing is listed under "Static Maps" and there are no "Computers" in Open Directory. /var/db/dhcp_leases is empty. /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash is as follows: Process: bootpd [3572] Path: /usr/libexec/bootpd Identifier: bootpd Version: ??? (???) Code Type: X86-64 (Native) Parent Process: launchd [1] Date/Time: 2010-08-12 16:54:59.713 -0400 OS Version: Mac OS X Server 10.6.4 (10F569) Report Version: 6 Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x0000000000000000, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: __abort() called Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 libSystem.B.dylib 0x00007fff803c13d6 __kill + 10 1 libSystem.B.dylib 0x00007fff80461913 __abort + 103 2 libSystem.B.dylib 0x00007fff80456157 mach_msg_receive + 0 3 libSystem.B.dylib 0x00007fff803b92cf __strncpy_chk + 14 4 bootpd 0x0000000100014e5d PLCache_read + 782 5 bootpd 0x0000000100004a3d BSDPClients_init + 68 6 bootpd 0x00000001000053b5 bsdp_init + 2396 7 bootpd 0x000000010000200b S_update_services + 1228 8 bootpd 0x0000000100002344 S_server_loop + 571 9 bootpd 0x0000000100003963 main + 1766 10 bootpd 0x0000000100000984 start + 52 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000000000000 rbx: 0x00007fff5fbfe220 rcx: 0x00007fff5fbfe218 rdx: 0x0000000000000000 rdi: 0x0000000000000df4 rsi: 0x0000000000000006 rbp: 0x00007fff5fbfe240 rsp: 0x00007fff5fbfe218 r8: 0x0000000000000001 r9: 0x0000000100114280 r10: 0x00007fff803bd412 r11: 0xffffff80002e1680 r12: 0xffffffffffffffff r13: 0x00007fff5fbfe330 r14: 0x00007fff5fbfe33b r15: 0x00007fff7009bec0 rip: 0x00007fff803c13d6 rfl: 0x0000000000000202 cr2: 0x000000010004c000 Any thoughts or suggestions as to resolving this?
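
    A hedged recovery sketch: the backtrace shows bootpd aborting in PLCache_read() while BSDPClients_init() loads its NetBoot/BSDP client cache, so a corrupt cache file is one plausible trigger. The /var/db/bsdpd_clients path is an assumption to verify before touching anything, and the file is moved aside rather than deleted:

        sudo serveradmin stop dhcp
        # move the suspect cache aside so it can be restored if this is not the cause
        sudo mv /var/db/bsdpd_clients /var/db/bsdpd_clients.bad
        sudo serveradmin start dhcp
        # watch whether bootpd stays up this time
        tail -f /var/log/system.log | grep bootpd

    If bootpd still aborts with the same "detected buffer overflow", the cache was not the culprit; restore the file and look instead at the phantom static map that Server Admin reports but does not display (bootpd's configuration usually lives in /etc/bootpd.plist).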

    Read the article

  • Suspicious process running under user 'named'

    - by Amit
    I get a lot of emails reporting this and I want this issue to auto correct itself. These process are run by my server and are a result of updates, session deletion and other legitimate session handling reported as false positives. Here's a sample report: Time: Sat Oct 20 00:00:03 2012 -0400 PID: 20077 Account: named Uptime: 326117 seconds Executable: /usr/sbin/nsd\00507d27e9\0053\00\00\00\00\00 (deleted) The file system shows this process is running an executable file that has been deleted. This typically happens when the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this excecutable file. See csf.conf and the PT_DELETED text for more information about the security implications of processes running deleted executable files. Command Line (often faked in exploits): /usr/sbin/nsd -c /etc/nsd/nsd.conf Network connections by the process (if any): udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 udp: 127.0.0.1:53 -> 0.0.0.0:0 udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: 127.0.0.1:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 Files open by the process (if any): /dev/null /dev/null /dev/null Memory maps by the process (if any): 0045e000-00479000 r-xp 00000000 fd:00 2582025 /lib/ld-2.5.so 00479000-0047a000 r--p 0001a000 fd:00 2582025 /lib/ld-2.5.so 0047a000-0047b000 rw-p 0001b000 fd:00 2582025 /lib/ld-2.5.so 0047d000-005d5000 r-xp 00000000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d5000-005d7000 r--p 00157000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d7000-005d8000 rw-p 00159000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d8000-005db000 rw-p 005d8000 00:00 0 005dd000-005e0000 r-xp 00000000 fd:00 2582087 /lib/libdl-2.5.so 005e0000-005e1000 r--p 00002000 fd:00 2582087 /lib/libdl-2.5.so 005e1000-005e2000 rw-p 00003000 fd:00 2582087 /lib/libdl-2.5.so 0062b000-0063d000 r-xp 00000000 fd:00 2582079 /lib/libz.so.1.2.3 0063d000-0063e000 rw-p 00011000 fd:00 2582079 /lib/libz.so.1.2.3 00855000-0085f000 r-xp 00000000 fd:00 2582022 /lib/libnss_files-2.5.so 0085f000-00860000 r--p 00009000 fd:00 2582022 /lib/libnss_files-2.5.so 00860000-00861000 rw-p 0000a000 fd:00 2582022 /lib/libnss_files-2.5.so 00ac0000-00bea000 r-xp 00000000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bea000-00bfe000 rw-p 00129000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bfe000-00c01000 rw-p 00bfe000 00:00 0 00e68000-00e69000 r-xp 00e68000 00:00 0 [vdso] 08048000-08074000 r-xp 00000000 fd:00 927261 /usr/sbin/nsd 08074000-08079000 rw-p 0002b000 fd:00 927261 /usr/sbin/nsd 08079000-0808c000 rw-p 08079000 00:00 0 08a20000-08a67000 rw-p 08a20000 00:00 0 b7f8d000-b7ff2000 rw-p b7f8d000 00:00 0 b7ffd000-b7ffe000 rw-p b7ffd000 00:00 0 bfa6d000-bfa91000 rw-p bffda000 00:00 0 [stack] Would /etc/nsd/restart or kill -1 20077 solve the problem?
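
    A hedged sketch of clearing the warning, assuming the stock CentOS init script (the report itself already says the binary was simply replaced by an update): a full restart makes nsd re-execute the on-disk binary, whereas kill -1 (SIGHUP) typically only reloads configuration and leaves the old, deleted image in memory, so the report would keep firing:

        service nsd restart            # or: /etc/init.d/nsd restart
        # confirm the running process now points at the real binary, not "(deleted)"
        ls -l /proc/$(pgrep -o -f '/usr/sbin/nsd')/exe

    If lfd/csf keeps flagging the process after a clean restart, re-check the executable path in the report before whitelisting anything.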

    Read the article

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's 'notify me when a contact comes online), but it is pronouncing some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try: say "Hi, Joel Spolsky" The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that say "Pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way since it can pronounce "iphone" as Apple wants. Update - After some research, here's what I've learned: Text-to-speech is split between turning the text to phonemes, and then the phonemes are turned into audio using a voice. Changing the voice doesn't effect the phonemes. The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML. Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code signed by Apple - changing them may prevent some programs from working. PrefixDictionary CartNames CartLite SymbolDictionary Homophones There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed. My guesses are: say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say. Skype probably uses Speech Synthesis Manager directly, leaving no hooks to change the way my friend's names are pronounced, other than spelling them phonetically, which is silly. The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion: #!/bin/sh echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say Finally, some fun command line stuff: # Apple is weird sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump # Get too much information about what files are being opened sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }' # Just fun say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky" echo "scale=1000; 4*a(1)" | bc -l | say

    Read the article

  • ASA5505 Novice. Setting up Outside/Inside/and DMZ as Guest Network

    - by GriffJ
    I need a little help in developing a config for our ASA5505. I'm an MCSA/MCITPAS but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway, and, well, it's antiquated and it only has a 50-user license, which means I'm constantly clearing local-host throughout the day as people complain. I discovered that the last IT person bought a couple of ASA5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but since I was going this far I thought I'd go further and remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. So I'm going to paste a scenario I wrote up with the actual IPs changed to protect the innocent.

        Outside Network: 1.2.3.10 255.255.255.248 (we have a /29)
        Inside Network: 10.10.36.0 255.255.252.0
        DMZ Network: 192.168.15.0 255.255.255.0

        Outside Network on e0/0
        DMZ Network on e0/1
        Inside Network on e0/2-7

        DMZ Network has DHCPD enabled.
        DMZ DHCPD pool is 192.168.15.50-192.168.15.250
        DMZ Network needs to be able to see DNS on the Inside Network at 10.10.37.11 and 10.10.37.12
        DMZ Network needs to be able to access webmail on the inside network at 10.10.37.15
        DMZ Network needs to be able to access the business website on the inside network at 10.10.37.17
        DMZ Network needs to be able to access the outside network (access to the internet).
        Inside Network has NO DHCPD. (DHCP is handled by the domain controller)
        Inside Network needs to be able to see anything on the DMZ network.
        Inside Network needs to be able to access the outside network (access to the internet).

    There is some access-list stuff and some static mapping already. This maps external IPs from our ISP to our inside server IPs:

        static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
        static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
        static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

    This allows access to our webserver/mailserver/VPN from the outside:

        access-list 108 permit tcp any host 1.2.3.11 eq https
        access-list 108 permit tcp any host 1.2.3.11 eq smtp
        access-list 108 permit tcp any host 1.2.3.11 eq 993
        access-list 108 permit tcp any host 1.2.3.11 eq 465
        access-list 108 permit tcp any host 1.2.3.12 eq www
        access-list 108 permit tcp any host 1.2.3.12 eq https
        access-list 108 permit tcp any host 1.2.3.13 eq pptp

    Here is all the NAT and route stuff I have so far:

        global (outside) 1 interface
        global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
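
    A hedged sketch of the DMZ-specific pieces, in pre-8.3 syntax to match the config above (the VLAN number, the DMZ interface address 192.168.15.1 and the ACL name are assumptions, and a Base-licensed 5505 restricts its third VLAN, so check the license first): security levels already allow inside-to-DMZ traffic, and an inbound ACL on the DMZ interface pins down exactly what guests may reach on the inside before letting them out to the internet:

        interface Ethernet0/1
         switchport access vlan 3
        interface Vlan3
         nameif dmz
         security-level 50
         ip address 192.168.15.1 255.255.255.0
        !
        dhcpd address 192.168.15.50-192.168.15.250 dmz
        dhcpd dns 10.10.37.11 10.10.37.12 interface dmz
        dhcpd enable dmz
        !
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
        access-list dmz_in extended deny ip 192.168.15.0 255.255.255.0 10.10.36.0 255.255.252.0
        access-list dmz_in extended permit ip 192.168.15.0 255.255.255.0 any
        access-group dmz_in in interface dmz

    The order matters: the specific permits to the three inside hosts come first, the deny blocks the rest of the inside range, and the final permit lets guests out to the internet, where the existing nat (dmz) 1 line plus global (outside) 1 interface should PAT them out.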

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to be share files and printers with one another. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it’s almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we’ve put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network. Sharing Between Windows 7 and XP The most common scenario you’re probably going to run into is sharing between Windows 7 and XP.  Essentially you’ll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and making sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP  Map a Network Drive Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don’t need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7 Sharing between Vista and Windows 7 Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn’t compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista Sharing between Vista and XP When Windows Vista came out, hardware requirements were intensive, drivers weren’t ready, and sharing between them was complicated due to the new Vista structure. The sharing process is pretty straight-forward if you’re not using password protection…as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines Sharing Between Windows 7 with Homegroup If you have one or more Windows 7 machine, sharing files and devices becomes extremely easy with the Homegroup feature. It’s as simple as creating a Homegroup on on machine then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don’t want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public Folder, and setup a printer to be shared as well.   
    Use the Homegroup Feature in Windows 7 to Share Printers and Files
    Create a Homegroup & Join a New Computer To It
    Change which Files are Shared in a Homegroup
    Windows Home Server
    If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes.
    How to Install and Setup Windows Home Server
    How to Manage Shared Folders on Windows Home Server
    Conclusion
    The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won’t work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn’t always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs:
    Share a Printer on a Home Network from Vista or XP to Windows 7
    How to Share a Folder the XP Way in Windows Vista

    Read the article

  • Fix Windows Computer Problems with Microsoft Fix it Center

    - by Matthew Guay
    Fixing computer problems can often be difficult, but Microsoft is aiming to make it as simple as a couple clicks with.  Here’s how you can easily fix computer problems with Microsoft’s new Fix it Center Beta. Last year Microsoft began offering small Fix it scripts that you could download and run to help solve common computer problems automatically.  These were added to some of the most visited Windows help pages, and helped fix problems with things such as printing errors and Aero glass support.  Now, the Fix it scripts have been bundled together with the Fix it Center, making fixing your computer even easier.  This free tool works great on all editions of Windows XP, Vista, and Windows 7. Note: The Fix it Center is currently in beta, so only run if you are comfortable running beta software. Getting Started Download the Fix it Center installer (link below), and install as normal. The installer will download the remaining components, and then finish the installation. In Windows XP, if you have not yet installed .NET 2.0, you may see the following prompt.  Click Yes to go to the download site, and once you’ve installed .NET 2.0, run the Fix it Center setup again. Also, the Fix it Center uses PowerShell to automate its fixes, but if it is not installed yet the installer will automatically download and install it. Find Fixes for Your PC Once Fix it Center is installed, you can personalize it for your computer.  Select Now, and the click Next. It will scan your computer for problems with known solutions, and will offer to go ahead and install these troubleshooters.  If you choose to not install them, you can always download them from within the Fix it Center at a later time. While those troubleshooters are downloading, you can create a Fix it account.  This will give you additional help and support, and let you review Fix it solutions for all your computers from an online dashboard.  You need a Windows Live ID to create an account. Also, choose whether or not to send information to Microsoft about your hardware and software problems. Get Problems Fixed Now that the Fix it Center is installed and has identified issues on your computer, it’s time to get the problems fixed.  Here’s the default front screen in Windows 7, showing all of the available fixes. And here’s the Fix it Center running in Windows XP. Select one of the Troubleshooters to see more information about it, and click Run to start it. You can choose to either detect problems and have them fixed automatically, or you can choose for the Fix it Center to show you the solutions and let you choose whether to apply them or not.  The defaults usually work good, and only take a couple minutes to apply the fixes, but you can select your own fixes if you’d rather be in control. It will scan your computer for known problems in this area, and then will show you the results.  Here, Fix it determined that startup programs may be causing performance issues.  Select Start System Configuration, and uncheck any of the programs you do not usually use. Once you’ve run a troubleshooter, you can see the issues it checked for and any problems it discovered. If you created the online account, you can also choose to view the details online.  This will show all of your computers with Fix it Center and the fixes you’ve run on them.   Conclusion Whether you’re a power user or new to computers, sometimes it’s best to just get your problems fixed and go on with life instead of digging through the registry, forums, and hacking your way to a solution.  
    Remember the service is still in beta and may not work perfectly or solve your issues every time. But it’s something cool and worth a look.
    Links
    Download Microsoft Fix it Center Beta
    Fix additional problems with Microsoft’s Fix it Center Online

    Read the article

  • XNA Notes 006

    - by George Clingerman
    If you used to think the XNA community was small and inactive, hopefully these XNA Notes are opening your eyes. And I honestly feel like I’m still only catching the tail end of everything that’s going on. It’s a large and active community and you can be so mired down in one part of it you miss all sorts of cool stuff another part is doing. XNA is many things to a lot of people and that makes for a lot of really awesome things going on. So here’s what I saw going on this last week! Time Critical XNA New: XNA Team - Peer Review now closes for XNA 3.1 games http://blogs.msdn.com/b/xna/archive/2011/02/08/peer-review-pipeline-closed-for-new-xna-gs-3-1-games-or-updates-on-app-hub.aspx http://twitter.com/XNACommunity/statuses/34649816529256448 The XNA Team posts about a meet up with Microsoft for Creator’s going to be at GDC, March 3rd at the Lobby Bar http://on.fb.me/fZungJ XNA Team: @mklucher is busying playing the the bubblegum on WP7 made by a member of the XNA team (although reportedly made in Silverlight? Crazy! ;) ) http://twitter.com/mklucher/statuses/34645662737895426 http://bubblegum.me Shawn Hargreaves posts multiple posts (is this a sign that something new is coming from the XNA team? Usually when Shawn has time to post, something has just wrapped up…) Random Shuffle http://blogs.msdn.com/b/shawnhar/archive/2011/02/09/random-shuffle.aspx Doing the right thing: resume, rewind or skip ahead http://blogs.msdn.com/b/shawnhar/archive/2011/02/10/doing-the-right-thing-resume-rewind-or-skip-ahead.aspx XNA Developers: Andrew Russel was on .NET Rocks recently talking with Carl and Richard about developing games for Xbox, iPhone and Android http://www.dotnetrocks.com/default.aspx?ShowNum=635 Eric W. releases the Fishing Girl source code into the wild http://ericw.ca/blog/posts/fishing-girl-now-open-source/ http://forums.create.msdn.com/forums/p/74642/454512.aspx#454512 BinaryTweedDeej reminds that XNA community that Indie City wants you involved http://twitter.com/BinaryTweedDeej/statuses/34596114028044288 http://www.indiecity.com Mike McLaughlin (@mikebmcl) releases his first two XNA articles on the TechNet wiki http://social.technet.microsoft.com/wiki/contents/articles/xna-framework-overview.aspx http://social.technet.microsoft.com/wiki/contents/articles/content-pipeline-overview.aspx John Watte plays around with the Content Pipeline and Music Visualization exploring just what can be done. 
http://www.enchantedage.com/xna-content-pipeline-fft-song-analysis http://www.enchantedage.com/fft-in-xna-content-pipeline-for-beat-detection-for-the-win Simon Stevens writes up his talk on Vector Collision Physics http://www.simonpstevens.com/News/VectorCollisionPhysics @domipheus puts together an XNA Task Manager http://www.flickr.com/photos/domipheus/5405603197/ MadNinjaSkillz releases his fork of Nick's Easy Storage component on CodePlex http://twitter.com/MadNinjaSkillz/statuses/34739039068229634 http://ezstorage.codeplex.com @ActiveNick was interviewed by Rob Cameron and discusses Windows Phone 7, Bing Maps and XNA http://twitter.com/ActiveNick/statuses/35348548526546944 http://msdn.microsoft.com/en-us/cc537546 Radiangames (Luke Schneider) posts about converting his games from XNA to Unity http://radiangames.com/?p=592 UberMonkey (@ElementCy) posts about a new project in the works, CubeTest a Minecraft style terrain http://www.ubergamermonkey.com/personal-projects/new-project-in-the-works/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Ubergamermonkey+%28UberGamerMonkey%29 Xbox LIVE Indie Games (XBLIG): VideoGamer Rob review Bonded Realities http://videogamerrob.wordpress.com/2011/02/05/xblig-review-bonded-realities/ XBLIG Round Up on Gamergeddon http://www.gamergeddon.com/2011/02/06/xbox-indie-game-round-up-february-6th/ Are gamers still rating Indie Games after the Xbox Dashboard update? http://www.gamemarx.com/news/2011/02/06/are-gamers-still-rating-indie-games-after-the-xbox-dashboard-update.aspx Joystiq - Xbox Live Indie Gems: Corrupted http://www.joystiq.com/2011/02/04/xbox-live-indie-gems-corrupted/ Raymond Matthews of DarkStarMatryx reviews (Almost) Total Mayhem and Aban Hawkins & the 1000 Spikes http://www.darkstarmatryx.com/?p=225 http://www.darkstarmatryx.com/?p=229 8 Bit Horse reviews Aban Hawkins & the 1000 spikes http://8bithorse.blogspot.com/2011/01/aban-hawkins-1000-spikes-xbl-indie.html 2010 wrap-up for FunInfused Games http://www.krissteele.net/blogdetails.aspx?id=245 NeoGaf roundup of January's XBLIGs http://www.neogaf.com/forum/showthread.php?t=420528 Armless Ocotopus interviews Michael Ventnor creator of Bonded Realities http://www.armlessoctopus.com/2011/02/07/interview-michael-ventnor-of-red-crest-studios/ @recharge_media posts about the new city music for Woodvale in Sin Rising http://rechargemedia.com/2011/02/08/new-city-theme-woodvale/ @DrMisty posts some footage of YoYoYo in action http://www.mstargames.co.uk/mistryblogmain/54-yoyoyoblogs/184-video-update.html Xona Games - Decimation X3 on Reviews on the Run http://video.citytv.com/video/detail/782443063001.000000/reviews-on-the-run--february-8-2011/g4/ @benkane gives an early peek at his action RPG coming to XBLIG http://www.youtube.com/watch?v=bDF_PrvtwU8 Rock, Paper Shotgun talks to Zeboyd games about bringing Cthulhu Saves the World to PC http://www.rockpapershotgun.com/2011/02/11/summoning-cthulhu-natter-with-zeboyd/ Xbox LIVE Indieverse interviews the creator of Bonded Realities http://xbl-indieverse.blogspot.com/2011/02/xbl-indieverse-interview-red-crest.html XNA Game Development: Dream-In-Code posts about an upcoming XNA Challenge/Coding contest http://www.dreamincode.net/forums/blog/1385/entry-3192-xna-challengecontest/ Sgt.Conker covers Fishing Girl and IndieFreaks Game Framework release http://www.sgtconker.com/2011/02/fishing-girl-did-not-sell-a-single-copy/ http://www.sgtconker.com/2011/02/indiefreaks-game-framework-v0-2-0-0/ @slyprid releases Transmute v0.40a with lots of new 
features and fixes http://twitter.com/slyprid/statuses/34125423067533312 http://twitter.com/slyprid/statuses/35326876243337216 http://forgottenstarstudios.com/ Jeff Brown writes an XNA 4.0 tutorial on Saving/Loading on the Xbox 360 http://www.robotfootgames.com/xna-tutorials/92-xna-tutorial-savingloading-on-xbox-360-40 XNA for Silverlight Developers: Part 3- Animation http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+xna-connection-twitter-specific-stream+%28XNA+Connection%27s+Twitter+specific+stream%29 The news from Nokia is definitely something XNA developers will want to keep their eye on http://blogs.forum.nokia.com/blog/nokia-developer-news/2011/02/11/letter-to-developers?sf1066337=1

    Read the article

  • SQL SERVER – Spatial Database Queries – What About BLOB – T-SQL Tuesday #006

    - by pinaldave
    Michael Coles is one of the most interesting book authors I have ever met. He has a flair of writing complex stuff in a simple language. There are a very few people like that.  I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption. I strongly suggest taking a look at it. This blog is written in response to T-SQL Tuesday #006: “What About BLOB? by Michael Coles. Spatial Database is my favorite subject. Since I did my TechEd India 2010 presentation, I have enjoyed this subject a lot. Before I continue this blog post, there are a few other blog posts, so I suggest you read them.  To help build the environment run the queries, I am going to present them in this single blog post. SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing This blog post explains the basics of Spatial Database and also provides a good introduction to Indexing concept. SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database This blog post will enable you with how to load the shape file into database. SQL SERVER – Spatial Database Definition and Research Documents This blog post links to the white paper about Spatial Database written by Microsoft experts. SQL SERVER – Introduction to Spatial Coordinate Systems: Flat Maps for a Round Planet This blog post links to the white paper explaining coordinate system, as written by Microsoft experts. After reading the above listed blog posts, I am very confident that you are ready to run the following script. Once you create a database using the World Shapefile, as mentioned in the second link above,you can display the image of India just like the following. Please note that this is not an accurate political map. The boundary of this map has many errors and it is just a representation. You can run the following query to generate the map of India from the database spatial which you have created after following the instructions here. USE Spatial GO -- India Map SELECT [CountryName] ,[BorderAsGeometry] ,[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now, let us find the longitude and latitude of the two major IT cities of India, Hyderabad and Bangalore. I find their values as the following: the values of longitude-latitude for Bangalore is 77.5833300000 13.0000000000; for Hyderabad, longitude-latitude is 78.4675900000 17.4531200000. Now, let us try to put these values on the India Map and see their location. -- Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(20000); -- Hyderabad DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(20000); -- Bangalore and Hyderabad on Map of India SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now let us quickly draw a straight line between them. 
DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(10000); DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(10000); DECLARE @GeoLocation2 GEOGRAPHY SET @GeoLocation2 = GEOGRAPHY::STGeomFromText('LINESTRING(78.4675900000 17.4531200000, 77.5833300000 13.0000000000)',4326) SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I1 WHERE I1.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '' name, @GeoLocation2 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Let us use the distance function of the spatial database and find the straight line distance between this two cities. -- Distance Between Hyderabad and Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326) DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326) SELECT @GeoLocation.STDistance(@GeoLocation1)/1000 'KM'; GO The result of above query is as displayed in following image. As per SQL Server, the distance between these two cities is 501 KM, but according to what I know, the distance between those two cities is around 562 KM by road. However, please note that roads are not straight and they have lots of turns, whereas this is a straight-line distance. What would be more accurate is the distance between these two cities by air travel. When we look at the air travel distance between Bangalore and Hyderabad, the total distance covered is 495 KM, which is very close to what SQL Server has estimated, which is 501 KM. Bravo! SQL Server has accurately provided the distance between two of the cities. SQL Server Spatial Database can be very useful simply because it is very easy to use, as demonstrated above. I appreciate your comments, so let me know what your thoughts and opinions about this are. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database

    Read the article

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my third SharePoint Saturday attended and second I’ve spoken at.  I believe today it was announced that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of conference organizers) and I both commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here’s my recap of the weekend. Arrival     Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm.  I was attempting to avoid the rush hour traffic and construction backups.  Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with Green, Yellow, and Red, but this line was black if that tells you how bad it got. Speaker “Dinner”     Fast forward a few hours and it was time for the speaker “dinner.”  I put “dinner” in quotes because with this night alone SPSMI set a new bar for nicest and most extravagant speaker appreciation events for SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies. Sessions     The actual conference started Saturday morning at 9am with the keynote by Rob Collie who is the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions: Mike Watson (@mikewat) – “SharePoint 2010 Fight Night: Devs vs. Admins” Karl Swedeberg (@kswedberg) – “A Walk on the Client Side with jQuery“ [my session] Brian Jackett (@briantjackett) - “Real World Deployment of SharePoint 2007 Solutions” Jeff Willinger (@jwillie) - “Social Computing and Collaboration Inside and Outside the 4 Walls” Paul Schaeflein (@paulschaeflein) – “PowerShell for the SharePoint Developer” My Presentation     I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn’t without its fair share of technical issues.  As my session was right after lunch I came in to my room 10 mins early to set up my laptop, slides, and demos.  As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V.  As such I had prepared all of my demo virtual machines to run under Hyper-V.  
About 3 minutes before my session was scheduled to start though it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…     As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation.  Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session when everything was up and running on XP.  Despite this being the first time I gave this presentation I have to say it was one of my favorites I’ve given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session, I appreciate it very much. Link to Presentation Files     For those of you who attended my session and would like my slides or demo PowerShell scripts they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn’t mind rating my session I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any. SkyDrive folder with session files Rate my SharePoint 2007 Solutions session   Picture Albums     For everyone else, here are my pictures from the weekend.  The first link is to my FaceBook album which will have tagging (recommend this one.)  The second is to my Live album if you care for higher resolution images. http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c View Full Album Conclusion     A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I’ve said so many times, without each and every one of you these events wouldn’t be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I’m submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven’t planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.         -Frog Out   links: Michigan image

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy.  Legal, effective tax planning is perceived as a “moral” issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities.  Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management joined me for a discussion/podcast on this interesting subject.  So what exactly is “tax transfer pricing”? Marc defined it this way. “Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from ‘high tax’ countries to ‘low tax’ countries.” According to Marc, in today’s global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company.  To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the businesses activity- as if two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated to prove that allocations fall within reasonable ranges.  What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars!  Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an “Action Plan for Base Erosion and Profit Shifting”. As Marc explained, the times are changing and companies need to be responsive to this issue. “It feels like every other week there is another company being accused of avoiding taxes,” said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks, have all been accused of tax avoidance. It’s imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition specifically focused on the tax element of profit.  However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line. 
In fact, there is a great opportunity for alignment across these two different functions. So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing.  "First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the "data staging" to the next, critical sub-processes.  From here, we leverage the Oracle EPM platform's ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM).  Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios.  Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM's pre-packaged reporting capability known as Oracle Hyperion SmartView for Office."   Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable.  "HPCM's Traceability Maps are the visual representation of all allocation flows that have been executed and are the tax transfer analyst's best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice.  Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals." Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and the tax set of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY. So, what's your tax strategy? Why not automate the tax transfer pricing process! To listen to the entire podcast, click here. To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.
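For readers who want to see the arithmetic behind the "cost plus" rule Bart mentions, here is a minimal sketch in Python. It is not the Oracle HPCM allocation engine, just an illustration of the method; the entity descriptions, cost figures and the 8% markup are made-up example values.

# Toy illustration of a cost-plus transfer pricing charge between two related
# legal entities. NOT the Oracle Hyperion/HPCM engine -- just the arithmetic
# behind the "cost plus" allocation rule described above. All values are made up.
def cost_plus_charge(costs, markup_rate):
    """Return the intercompany charge: allocated cost base plus an arm's-length markup."""
    base = sum(costs.values())
    markup = base * markup_rate
    return base + markup, markup

if __name__ == "__main__":
    # Costs a shared-service entity incurred on behalf of a related distributor.
    shared_service_costs = {"IT support": 120_000.0, "back office": 80_000.0}
    markup_rate = 0.08  # assumed arm's-length markup for this service type

    charge, markup = cost_plus_charge(shared_service_costs, markup_rate)
    print(f"Cost base:            {sum(shared_service_costs.values()):>12,.2f}")
    print(f"Markup ({markup_rate:.0%}):          {markup:>12,.2f}")
    print(f"Intercompany invoice: {charge:>12,.2f}")

Running a few of these side by side is, in spirit, the "what-if scenario" comparison the HPCM discussion describes.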

    Read the article

  • To ORM or Not to ORM. That is the question&hellip;

    - by Patrick Liekhus
UPDATE:  Thanks for the feedback and comments.  I have adjusted my table below with your recommendations.  I had missed a point or two. I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and reasoning on why I chose to use these tools. Let's start by defining the term ORM, or Object-Relational Mapping.  According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care?  Basically it allows you to map your business objects in code to the persistence layer behind them. And better yet, why would you want to do this?  Let me outline it in the following points:
- Development speed.  No more need to write repetitive code mapping query results to object members.  Once the map is created the code is rendered for you.
- Persistence portability.  The ORM knows how to map SQL-specific syntax for the persistence engine you choose.  It does not matter if it is SQL Server, Oracle or another database of your choosing.
- Standard/boilerplate code is simplified.  The basic CRUD operations are consistent and can use database metadata for basic operations.
So how does this help?  Well, let's compare some of the ORM tools that I have used and/or researched.  I have been interested in ORM for some time now.  My ORM of choice for a long time was NHibernate and I still believe it has a strong case in some business situations.  However, you have to take business considerations into account and the law of diminishing returns.  Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO).  The primary reason for this is because they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO.  With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps.  While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking.  DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it.  This sounds worse than it really is.  What I mean by this is that if you choose to follow DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while also not worrying about the basics. I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to.  Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable.  And when you couple that with the EDMX modeling tools and code generation, it becomes a no-brainer. 
Designer features compared across Entity Framework / NHibernate / Fluent w/ NHibernate / Telerik OpenAccess / DevExpress XPO / DevExpress XPO/XAF plus Liekhus Tools:
- Uses XML to map relationships: - / Yes / - / - / - / -
- Visual class designer interface: Yes / - / - / - / - / Yes
- Management integrated w/ Visual Studio: Yes / - / - / Yes / - / Yes
- Supports schema first approach: Yes / - / - / Yes / - / Yes
- Supports model first approach: Yes / - / - / Yes / Yes / Yes
- Supports code first approach: Yes / Yes / Yes / Yes / Yes / Yes
- Attribute driven coding style: Yes / - / Yes / - / Yes / Yes
I have a very small team and limited resources with a lot of responsibilities.  In order to keep up with our customers, we must rely on tools like these.  We use the EDMX tool so that we can create a visual representation of the applications with our customers.  Second, we rely on the code generation so that we can focus on the business problems at hand and not whether a field is mapped correctly.  This keeps us from requiring as many junior level developers on our team.  I have also worked on multiple teams where they believed in writing their own "framework".  In my experiences and opinion this is not the route to take unless you have a team dedicated to supporting just the framework.  Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements.  With an ORM, there are a lot of people smarter than me working on the bigger issue of persistence and performance.  Again, my recommendation would be to use an available framework and get to work on your business domain problems.  If your coding is not making money for you, why are you working on it?  Do you really need to be writing query-to-object-member code again and again? Thanks
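XPO and XAF are .NET products, so the snippet below is not their API; to keep the examples on this page in one language it uses Python's SQLAlchemy purely to illustrate the code-first, attribute/declarative mapping style the table compares. The class and column names are invented for the example.

# Illustrative only: a code-first, declarative ORM mapping in SQLAlchemy.
# The same idea (classes and attributes describing tables, CRUD without
# hand-written SQL) is what XPO, Entity Framework, NHibernate, etc. provide in .NET.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"           # the table comes from the class definition
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    city = Column(String(50))

# Swap the URL for SQL Server, Oracle, etc. -- the mapping code does not change.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
with Session() as session:
    session.add(Customer(name="Acme Corp", city="Columbus"))
    session.commit()
    print(session.query(Customer).filter_by(city="Columbus").count())  # -> 1

The persistence-portability point in the list above is exactly the create_engine line: only the connection URL changes per database engine.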

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services and SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos. The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you, as a client, know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, e.g. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex. Well, it is, and that's why SQL Server tries to do it for you: at startup it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, which is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we? You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought: well, what happens if I change my service account, won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts Kerberos can't identify the exact account that the service is running as; it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs. Reading this you will probably be thinking, "Oh my goodness, this is really difficult." It is. However, I've found today, while investigating something else, that there is an easy option. Use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the update rights in AD to set an SPN mapping for the computer account. 
This then allows the SPN mapping to work. I believe this also works for the Local System account. To get all the SPNs in your AD run the following (it could produce a large file, so you might want to restrict it to a specific OU or CN):
    ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt
You will read in the links below that you need SQL to register the SPN and how this is done:
- How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
- Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
- Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx
Summary
The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so fallen back to this domain account approach. So, in summary, to get Kerberos to work try using the Network Service or Local System accounts. For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx 
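As a small aid for the duplicate-SPN problem described above, here is a hedged Python sketch that scans the spn.txt produced by the ldifde command and flags any SPN registered on more than one account. It assumes the export is a plain-text LDIF file with one "dn:" line per object and unwrapped "servicePrincipalName:" lines; base64-encoded or line-wrapped attributes are not handled.

# Flag SPNs that appear on more than one account in an ldifde export (spn.txt).
# Assumes plain-text LDIF: a "dn: ..." line per object, then one
# "servicePrincipalName: ..." line per SPN. Wrapped/base64 values are ignored.
from collections import defaultdict

spn_owners = defaultdict(set)
current_dn = None

with open("spn.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        line = line.rstrip("\n")
        if line.lower().startswith("dn: "):
            current_dn = line[4:]
        elif line.lower().startswith("serviceprincipalname: ") and current_dn:
            spn = line.split(": ", 1)[1]
            spn_owners[spn.lower()].add(current_dn)

for spn, owners in sorted(spn_owners.items()):
    if len(owners) > 1:  # duplicate SPN -> Kerberos cannot pick an account, falls back to NTLM
        print(f"DUPLICATE {spn}")
        for dn in owners:
            print(f"    {dn}")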

    Read the article

  • Geocaching - World wide treasure hunt

I'm not quite sure how I came across this topic, but actually I find it absolutely interesting, challenging and, most of all, great fun for family and friends. The interesting part is for sure that you can follow other people's treasures and their preferred locations where a cache might be hidden. Of course, it won't be easy to find a cache after all. Sometimes there are even 'mystery caches' which have either riddles, further instructions or little brain games for you in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoors. A great experience to explore nature either on your own, with your family (especially with children), or as a treasure hunting pack with a couple of friends. What is geocaching? It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you. (Source: referral email from geocaching.com) Check out the Geocaching 101 for further details and information. They also provide a video channel on YouTube. Which equipment do I need? Any GPS-enabled device is sufficient to go on the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that will hopefully assist me soon. Combined with a map app like Google Maps and a nice compass app you should be fully equipped and ready to go. I guess that even a car navigation system is perfect for that task. Later on, with more experience and demand for technology (or precision), it might be interesting to opt for a dedicated GPS device, like a Garmin or any other brand on the market. What is a geocache and what does it contain? In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements. Finding your first geocache Well, first you have to have the interest to pick up the challenge. Then you have to check out the geocache directory on geocaching.com. They have recommendations for beginner's caches, but you are free to choose any. Actually, we have a Mystery Cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding. Team up with friends and family Especially as a beginner there might be misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind that a geocache might have been removed or lost due to careless people or whatever other reasons. 
Don't be disappointed in case you can't find anything... there might be nothing there anymore. A general recommendation in this case would be to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a world-wide community. Geocaches in Mauritius? Yes, there are currently about 45 geocaches spread all over the island, and even a single one in Rodriguez - that's gonna be a tough one. Hopefully, we will get increasing numbers, as Geocaching.com allows, no, better, even encourages you to hide new containers at locations of your choice. I think this is going to be real fun for us during the upcoming weeks and months, especially when we are travelling to other countries and transferring so-called trackable items between geocaches. My first impression is that Geocaching.com is very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually finding a container far away from your home is very rewarding. I'll keep you updated on these matters during the months to come...
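Since handling GPS coordinates is the part beginners tend to stumble over, here is a small, hedged Python sketch of the math a GPS or compass app does for you: great-circle distance and initial bearing from your position to a cache. The coordinates below are made-up values near Mauritius, not a real cache listing.

# Rough aid for the "handling the GPS coordinates" part: haversine distance (km)
# and initial bearing (degrees) from your position to a cache.
# Both coordinate pairs below are invented example values.
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance in km and initial bearing in degrees between two points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist_km = 6371.0 * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist_km, bearing

here = (-20.1609, 57.5012)   # made-up position near Port Louis
cache = (-20.2500, 57.4500)  # made-up cache coordinates
km, deg = distance_and_bearing(*here, *cache)
print(f"Cache is {km:.2f} km away, bearing {deg:.0f} degrees")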

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article. Restore that File or Photo using Recuva The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it, that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome! Restore that File or Photo using DiskDigger If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from…   Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right.  You can select one or more files, and choose to restore them. It’s pretty simple! Download DiskDigger from dmitrybrant.com Download Recuva from piriform.com Good luck recovering your deleted files! 
And keep in mind, DiskDigger is a totally free donationware software from a single, helpful guy… so if his software helps you recover a photo you never thought you'd see again, you might want to think about throwing him a dollar or two.

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:
- Organizations are still determining what their dataflow and analysis workloads will comprise
- Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
- The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.
However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:
- The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
- The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
- Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.
If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/path/to/allocations.xml</value>
</property>
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>pool.name</value>
</property>
<property>
  <name>pool.name</name>
  <value>${user.name}</value>
</property>
What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:
<?xml version="1.0"?>
<allocations>
  <pool name="dan">
    <minMaps>5</minMaps>
    <minReduces>5</minReduces>
    <maxMaps>25</maxMaps>
    <maxReduces>25</maxReduces>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
  </pool>
  <user name="dan">
    <maxRunningJobs>6</maxRunningJobs>
  </user>
  <userMaxJobsDefault>3</userMaxJobsDefault>
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>
In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the default number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. 
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
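When you do start trying out allocations, a trivial sanity check can catch typos before the JobTracker re-reads the file. The sketch below is not part of Hadoop; it is a hedged Python helper that parses an allocations.xml shaped like the sample above and verifies that each pool's minimums do not exceed its maximums.

# Hedged helper, not part of Hadoop: parse an allocations.xml shaped like the
# sample above and check that each pool's minMaps/minReduces never exceed
# its maxMaps/maxReduces. Adjust the element names if your file differs.
import xml.etree.ElementTree as ET

def check_allocations(path="allocations.xml"):
    tree = ET.parse(path)
    for pool in tree.getroot().findall("pool"):
        name = pool.get("name", "?")
        for lo_tag, hi_tag in (("minMaps", "maxMaps"), ("minReduces", "maxReduces")):
            lo_node, hi_node = pool.find(lo_tag), pool.find(hi_tag)
            if lo_node is None or hi_node is None:
                continue  # bound not set for this pool; nothing to compare
            lo, hi = int(lo_node.text), int(hi_node.text)
            status = "OK" if lo <= hi else "ERROR: min exceeds max"
            print(f"pool '{name}': {lo_tag}={lo} {hi_tag}={hi} -> {status}")

if __name__ == "__main__":
    check_allocations()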

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
In yesterday's blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data. In this series we have covered many of the most essential details about Big Data. At the beginning of this series, I encouraged readers to send me questions. One of the most popular questions is - "I want to learn more about Big Data. Where can I learn it?" This is indeed a great question, as there are plenty of resources out there for learning about Big Data, and it is difficult to select just one resource. Hence I decided to list here a few of the most important resources related to Big Data. Learn from Pluralsight Pluralsight is a global leader in high-quality online training for hardcore developers.  It has fantastic Big Data courses, and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data:
- Big Data: The Big Picture
- Big Data Analytics with Tableau
- NoSQL: The Big Picture
- Understanding NoSQL
- Data Analysis Fundamentals with Tableau
I encourage all of you to start with these video courses, as they are a fantastic way to learn the fundamentals. Learn from Apache Resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various concepts in Apache's library. I am pretty impressed with the documentation and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning:
- Hadoop: The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing.
- Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heat maps, and the ability to view MapReduce, Pig and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
- Avro: A data serialization system.
- Cassandra: A scalable multi-master database with no single points of failure.
- Chukwa: A data collection system for managing large distributed systems.
- HBase: A scalable, distributed database that supports structured data storage for large tables.
- Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
- Mahout: A scalable machine learning and data mining library.
- Pig: A high-level data-flow language and execution framework for parallel computation.
- ZooKeeper: A high-performance coordination service for distributed applications.
Learn from Vendors One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements, and there are lots of things required to set up a Big Data framework. Many users do not start with Big Data because they are afraid of the resources required to set up the framework as well as the time commitment. Here Hortonworks has created a fantastic learning environment. They have created a Sandbox with everything one needs to learn Big Data and have also provided excellent tutorials along with it. 
The Sandbox comes with a dozen hands-on tutorials that will guide you through the basics of Hadoop, and it also contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and the tutorials. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there are any other such platforms for learning Big Data. I will include them over here as well. Learn from Books There are indeed a few good books out there which one can refer to in order to learn Big Data. Here are a few good books which I have read. I will update the list as I learn more:
- Ethics of Big Data: Balancing Risk and Innovation
- Big Data for Dummies
- Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions
If you search on Amazon there are millions of books, but I think the above three are a great set and will give you good ideas about Big Data. Once you go through the above books, you will have a clear idea about the next step you should follow in this series. You will be capable enough to make the right decision for yourself. Tomorrow In tomorrow's blog post we will wrap up this series on Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
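If you want something concrete to run while working through those tutorials, the usual first exercise is a word count. Below is a hedged, generic sketch of a Hadoop Streaming style mapper and reducer in Python; it is not taken from any of the resources above, and the file name is just an example. You can test it locally with a shell pipeline (cat somefile.txt | python wordcount.py map | sort | python wordcount.py reduce) before trying it on a cluster.

#!/usr/bin/env python3
# Generic word-count sketch for Hadoop Streaming: one script that acts as the
# mapper or the reducer depending on its last command-line argument.
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Input arrives sorted by key, so counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n or 1)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    (mapper if sys.argv[-1] == "map" else reducer)()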

    Read the article

  • EHome IR receiver and Ubuntu 13 - any one have this working?

    - by squakie
    I have a "generic" USB IR receiver I purchased off of Ebay to make my life a little easier with XBMC on my Ubuntu box. I am currently running 13.10 and have never tried nor have any knowledge of IR in Ubuntu. I know of lirc, and I know a lot of it is now included in the kernel. My understanding is that lirc in basic terms maps pulses from an remote control to functions - like keyboard or mouse clicks. It is also my understanding that I might still need a driver or something for my device. lsusb shows the device as: Bus 006 Device 003: ID 147a:e016 Formosa Industrial Computing, Inc. eHome Infrared Receiver dmesg shows the following pertaining to the device: [43635.311985] usb 6-2: USB disconnect, device number 2 [43641.344387] usb 6-2: new full-speed USB device number 3 using ohci-pci [43641.543454] usb 6-2: New USB device found, idVendor=147a, idProduct=e016 [43641.543467] usb 6-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [43641.543473] usb 6-2: Product: eHome Infrared Transceiver [43641.543478] usb 6-2: Manufacturer: Formosa21 [43641.543483] usb 6-2: SerialNumber: FM000623 [43641.555736] Registered IR keymap rc-rc6-mce [43641.555968] input: Media Center Ed. eHome Infrared Remote Transceiver (147a:e016) as /devices/pci0000:00/0000:00:12.0/usb6/6-2/6-2:1.0/rc/rc2/input15 [43641.556221] rc2: Media Center Ed. eHome Infrared Remote Transceiver (147a:e016) as /devices/pci0000:00/0000:00:12.0/usb6/6-2/6-2:1.0/rc/rc2 [43641.556584] input: MCE IR Keyboard/Mouse (mceusb) as /devices/virtual/input/input16 [43641.557186] rc rc2: lirc_dev: driver ir-lirc-codec (mceusb) registered at minor = 0 [43641.731965] mceusb 6-2:1.0: Registered Formosa21 eHome Infrared Transceiver with mce emulator interface version 1 [43641.731978] mceusb 6-2:1.0: 2 tx ports (0x0 cabled) and 2 rx sensors (0x0 active) (excuse the double spacing, but I had to put in extra cr/lf using "enter" or the entire thing was just one long unreadable string). When I connect the same IR receiver to a Raspberry Pi running OpenELEC/XBMC there is no flashing led unless I press a remote button, and the device works. In Ubuntu, the led is constantly blinking, and nothing happens when I press a remote key. I tried the command line program to test but it never echoes anything back to the terminal window. I believe it must need some sort of driver or something else, but I am completely in the dark on this. 
If it matters I also have: - Logitech wireless keyboard/mouse USB receiver - Tenda USB wireless adapter And.....I've also noticed some errors now that show in dmesg that seem to somehow related to HDMI if that makes any sense: 46721.144731] HDMI: ELD buf size is 0, force 128 [46721.144749] HDMI: invalid ELD data byte 0 [46721.444025] HDMI: ELD buf size is 0, force 128 [46721.444061] HDMI: invalid ELD data byte 0 [46721.743375] HDMI: ELD buf size is 0, force 128 [46721.743411] HDMI: invalid ELD data byte 0 [46722.043092] HDMI: ELD buf size is 0, force 128 [46722.043118] HDMI: invalid ELD data byte 0 [46722.343086] HDMI: ELD buf size is 0, force 128 [46722.343122] HDMI: invalid ELD data byte 0 [46722.642517] HDMI: ELD buf size is 0, force 128 [46722.642574] HDMI: invalid ELD data byte 0 [46722.942459] HDMI: ELD buf size is 0, force 128 [46722.942485] HDMI: invalid ELD data byte 0 [46723.242103] HDMI: ELD buf size is 0, force 128 [46723.242129] HDMI: invalid ELD data byte 0 [46723.541877] HDMI: ELD buf size is 0, force 128 [46723.541923] HDMI: invalid ELD data byte 0 [58366.651954] HDMI: ELD buf size is 0, force 128 [58366.651980] HDMI: invalid ELD data byte 0 [58366.951523] HDMI: ELD buf size is 0, force 128 [58366.951549] HDMI: invalid ELD data byte 0 [58367.251075] HDMI: ELD buf size is 0, force 128 [58367.251121] HDMI: invalid ELD data byte 0 [58367.550517] HDMI: ELD buf size is 0, force 128 [58367.550563] HDMI: invalid ELD data byte 0 [58367.850219] HDMI: ELD buf size is 0, force 128 [58367.850256] HDMI: invalid ELD data byte 0 [58368.150160] HDMI: ELD buf size is 0, force 128 [58368.150185] HDMI: invalid ELD data byte 0 [58368.449544] HDMI: ELD buf size is 0, force 128 [58368.449570] HDMI: invalid ELD data byte 0 [58368.749583] HDMI: ELD buf size is 0, force 128 [58368.749629] HDMI: invalid ELD data byte 0 [58369.049280] HDMI: ELD buf size is 0, force 128 [58369.049326] HDMI: invalid ELD data byte 0 [58394.706273] HDMI: ELD buf size is 0, force 128 [58394.706300] HDMI: invalid ELD data byte 0 [58394.706350] HDMI: ELD buf size is 0, force 128 [58394.706367] HDMI: invalid ELD data byte 0 [58395.003032] HDMI: ELD buf size is 0, force 128 [58395.003058] HDMI: invalid ELD data byte 0 [58395.302680] HDMI: ELD buf size is 0, force 128 [58395.302705] HDMI: invalid ELD data byte 0 [58395.602442] HDMI: ELD buf size is 0, force 128 [58395.602477] HDMI: invalid ELD data byte 0 [58395.902143] HDMI: ELD buf size is 0, force 128 [58395.902179] HDMI: invalid ELD data byte 0 [58396.201839] HDMI: ELD buf size is 0, force 128 [58396.201875] HDMI: invalid ELD data byte 0 [58396.501538] HDMI: ELD buf size is 0, force 128 [58396.501574] HDMI: invalid ELD data byte 0 [58396.801232] HDMI: ELD buf size is 0, force 128 [58396.801268] HDMI: invalid ELD data byte 0 [58397.100583] HDMI: ELD buf size is 0, force 128 [58397.100627] HDMI: invalid ELD data byte 0 [63095.766042] systemd-hostnamed[8875]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname! dave@davepc:~$ EDIT: Maybe another way to look at this would be what does Ubuntu do or not do that OpenELEC does or doesn't do (on Raspberry Pi) such that it works in OpenELEC but not in Ubuntu?
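In case it helps anyone suggest a diagnosis, here is the kind of quick check I can run to see whether the "MCE IR Keyboard/Mouse (mceusb)" input device from the dmesg output emits key events at all when I press remote buttons. This is a hedged sketch using the python-evdev package (an assumption on my part - it has to be installed first, e.g. via pip), and it needs read access to /dev/input, so it is run with sudo.

# Quick check (not a fix): watch for key events from the IR input device that
# the kernel registered, to see whether remote presses reach userspace at all.
# Requires the python-evdev package and read access to /dev/input (run with sudo).
import evdev

devices = [evdev.InputDevice(path) for path in evdev.list_devices()]
ir_devices = [d for d in devices if "IR" in d.name or "mceusb" in d.name.lower()]

if not ir_devices:
    print("No IR-looking input device found; re-check lsusb/dmesg.")
else:
    dev = ir_devices[0]
    print(f"Listening on {dev.path} ({dev.name}) - press remote buttons, Ctrl+C to stop")
    for event in dev.read_loop():
        if event.type == evdev.ecodes.EV_KEY and event.value == 1:  # key-down events only
            print(evdev.categorize(event))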

    Read the article

  • View Docs and PDFs Directly in Google Chrome

    - by Matthew Guay
Would you like to view documents, presentations, and PDFs directly in Google Chrome?  Here's a handy extension that makes Google Docs your default online viewer so you don't have to download the file first. Getting Started By default, when you come across a PDF or other common document file online in Google Chrome, you'll have to download the file and open it in a separate application. It'd be much easier to simply view online documents directly in Chrome.  To do this, head over to the Docs PDF/PowerPoint Viewer page on the Chrome Extensions site (link below), and click Install to add it to your browser. Click Install to confirm that you want to install this extension. Extensions don't run by default in Incognito mode, so if you'd like to always view documents directly in Chrome, open the Extensions page and check Allow this extension to run in incognito. Now, when you click a link for a document online, such as a .docx file from Word, it will open in the Google Docs viewer. These documents usually render in their original full quality.  You can zoom in and out to see exactly what you want, or search within the document.  Or, if it doesn't look correct, you can click the Download link in the top left to save the original document to your computer and open it in Office.   Even complex PDFs render very nicely.  Do note that Docs will keep downloading the document as you're reading it, so if you jump to the middle of a document it may look blurry at first but will quickly clear up. You can even view famous presentations online without opening them in PowerPoint.  Note that this will only display the slides themselves, but if you're looking for information you likely don't need the slideshow effects anyway.   Adobe Reader Conflicts If you already have Adobe Acrobat or Adobe Reader installed on your computer, PDF files may open with the Adobe plugin.  If you'd prefer to read your PDFs with the Docs PDF Viewer, then you need to disable the Adobe plugin.  Enter the following in your Address Bar to open your Chrome Plugins page: chrome://plugins/ and then click Disable underneath the Adobe Acrobat plugin. Now your PDFs will always open with the Docs viewer instead. Performance Who hasn't been frustrated by clicking a link to a PDF file, only to have your browser pause for several minutes while Adobe Reader struggles to download and display the file?  Google Chrome's default behavior of simply downloading the files and letting you open them is hardly more helpful.  This extension takes away both of these problems, since it renders the documents on Google's servers.  Most documents opened fairly quickly in our tests, and we were able to read large PDFs only seconds after clicking their link.  Also, the Google Docs viewer rendered the documents much better than the HTML version in Google's cache. Google Docs did seem to have problems with some files, and we saw error messages on several documents we tried to open.  If you encounter this, click the Download link in the top left corner to download the file and view it from your desktop instead. Conclusion Google Docs has improved over the years, and now it offers fairly good rendering even on more complex documents.  This extension can make your browsing easier, and help documents and PDFs feel more like part of the Internet.  And, since the documents are rendered on Google's servers, it's often faster to preview large files than to download them to your computer. 
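If you're curious what an extension like this is doing behind the scenes, the general idea is to hand the document's URL to the hosted Google Docs viewer instead of downloading the file. Here is a hedged Python sketch that builds that kind of viewer link; the docs.google.com/viewer endpoint was the public viewer around the time this was written, and Google may have changed or retired it since.

# Hedged sketch of the idea behind the extension: point the browser at the
# hosted Google Docs viewer with the document's URL as a parameter, instead of
# downloading the file. The endpoint reflects the era of this article and may
# no longer behave the same way.
from urllib.parse import urlencode

def docs_viewer_url(document_url: str, embedded: bool = False) -> str:
    """Build a Google Docs viewer link for a publicly reachable document URL."""
    params = {"url": document_url}
    if embedded:
        params["embedded"] = "true"  # render without the viewer chrome, for iframes
    return "https://docs.google.com/viewer?" + urlencode(params)

# Example: turn a direct PDF link into a viewer link.
print(docs_viewer_url("http://example.com/whitepaper.pdf"))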
Link: Download the Docs PDF/PowerPoint Viewer extension from Google

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >