Search Results

Search found 1617 results on 65 pages for 'digital'.

  • #1 O’Reilly eBook for 2010

    - by Jan Goyvaerts
    The year-end issue of O’Reilly’s author newsletter discussed the trends O’Reilly has been seeing the past few years, and their predictions for 2011. The key trend is that digital is now more than ever poised to take over print: Our digitally distributed products have grown from 18.36% of our publishing mix in 2009 to 28.09% of our mix in 2010. What is more impressive is that our digitally distributed products have produced more than double the revenue that has been lost with the decline of print. I think this is important because some say that digital cannibalizes print products. Our data indicates the contrary, as print is declining much more slowly than digital is growing. I think we may be seeing developers purchasing a print book, and then purchasing the electronic editions to search and copy code from, as the incremental cost for digital is more than reasonable. My own book seems to be leading this trend. Thanks to everyone who purchased it! And the five bestselling O’Reilly ebook products for 2010: 1) Regular Expressions Cookbook, 2) jQuery Cookbook, 3) Learning Python, 4) HTML5: Up and Running, and 5) JavaScript Cookbook. I think it’s interesting that the top five ebooks are code-intensive books. They’re great products for search and code reuse. It’s also interesting that none of the top 5 ebooks made the top 5 in print.

    Read the article

  • hdmi AC-3 audio broke after upgrading from 11.10 to 12.04.3

    - by Jim LastName
    I just updated my MythBuntu 11.10 to 12.04.3. Now, when I try to play 5.1 content (a ripped DVD), my TV (and receiver) plays a "chattering" sound. I checked my receiver and the Dolby Digital light isn't on--it's in PCM mode. So either the audio is getting sent as AC-3 but the TV and receiver think it's PCM, or the AC-3 audio got converted to multichannel PCM and they can't handle it. My setup: HDMI cable from the HTPC to the TV. The TV has an S/PDIF output to my receiver. I know the TV sends AC-3 audio out correctly because I see the Dolby Digital light come on when I view a digital TV channel and the PCM light come on when I view an old analog channel. I can connect S/PDIF from my HTPC directly to my receiver and the Dolby Digital light comes on and it decodes the audio just fine. It's just not sending it right over HDMI. Now for some hints to the issue: I noticed in MythTV audio setup that when I select alsa:hdmi.... the description only lists 2-channel PCM audio capability. speaker-test -Dhdmi:PCH -c6 errors about a bad channel count (only -c2 works). Finally, I tried VLC and it makes the same chattering sound. These all make me think this isn't a MythTV issue; it's something lower than that. I think the best way to troubleshoot this is to start at the drivers and check each layer, one at a time, all the way up to ALSA. I just don't know what the layers are and how to do it. So I need to find some audio troubleshooting guide to assist me. Or, if one doesn't exist, I'd appreciate some steps. Thanks much, Jim
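
    A few generic ALSA checks can help narrow down which layer drops the channel information; this is a minimal sketch assuming the Intel "PCH" card from the speaker-test above sits at card 0 (device names and /proc paths vary by hardware):

      # list playback devices to confirm which card/device is the HDMI output
      aplay -l

      # inspect the ELD data the display advertises over HDMI; if it only
      # lists 2-channel PCM, the driver never learns about AC-3 support
      cat /proc/asound/card0/eld#0.0

      # check whether plain stereo at least works on the HDMI device
      speaker-test -D hdmi:CARD=PCH,DEV=0 -c 2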

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new. Just think of the once popular 'Who's Who'. What really changed is the price-value ratio. The curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in relevance of information, you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy': combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing). In addition, the different channels of digital information distribution significantly change the value of information. I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are then no longer visible in today's most relevant channels of information consumption. But the more freemium is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which - to me - somehow does not match the spirit of the WWW and generation Y's perception of information consumption and exchange.

    Read the article

  • Lending epub files for limited time to users

    - by JP Hellemons
    I am looking for components to build a digital library that lends people epub files (ebooks) for about a week. It's like a digital version of the old offline public library. Now, I have found several Flash (PDF) file streaming solutions, but those would require an active internet link. And like the public library, you should be able to take your books to the beach or pool on holiday abroad, where you have no connection. So streaming is not an option. The other file restriction method I have found was DRM, but that would require a really expensive license for Adobe Content Server 4, which is not suitable for my little hobby project. Yet it seems that Adobe Content Server and Adobe Digital Editions are the only option at the moment. Or are there open source alternatives?

    Read the article

  • HTG Explains: Photography with Film-Based Cameras

    - by Eric Z Goodnight
    We’ve become reliant on digital cameras since they are so easy to use. But have you ever wondered how film-based photography works? Read on to increase your photographic knowledge—or to develop a new appreciation for your point-and-click camera. Film-based cameras, to some, are a relic of the past: simply an old technology made obsolete by the new and improved. But to many, film is an artisan’s material, and a photographic experience no digital system could ever hope to recreate. While many photographers, professional and amateur alike, will swear by the quality of either film-based or digital cameras, the fact remains that film is still a valid way to take great photographs, and a fascinating way to learn more about how photography works.

    Read the article

  • The Island of Lost Apple Products

    - by Jason Fitzpatrick
    While Apple has had a mountain of commercial successes, every once in a while the crew in Cupertino strikes out. Here are some of the less successful and prematurely retired Apple products from the last two decades. Courtesy of Wired, we find nine of the least favorably received products in the Apple portfolio. Pictured here, the QuickTake Camera: Life Span: 1994 – 1997. Back in 1994, Apple was actually at the forefront of digital photography. The QuickTake Camera’s photos (640 x 480, at 0.3 megapixels) were borderline unusable for anything other than your Geocities homepage. But technology has to start somewhere. Still, Apple killed the line after just three years. And while the iPhone and other smartphones have replaced most people’s digital cameras, Apple could have reaped the benefits of the digital point-and-shoot salad years.

    Read the article

  • 7-Eleven Mobile App Powered by Oracle SOA Suite

    - by Bruce Tierney
    When you slurp that Slurpee, do you ever think about the sub-100-millisecond processing of 20 million 7-Eleven digital transactions every day supported by Oracle SOA Suite?  Maybe next time.  Check out this impressive video of Ronald Clanton, 7-Eleven's Digital Guest Experience Program Manager, describing how 7-Eleven provides a consistent view across all the end points of over 10,000 stores and their digital entities by using Oracle SOA Suite on Oracle Exalogic.  Managed by Oracle Enterprise Manager, they were able to provision their "Rapid-Fire" Middleware as a Service (MWaaS) in only "10 minutes" and deliver on time, completing testing ahead of schedule. So what are you waiting for?  Download your Slurpee App to get your free Pillsbury Cinnamon pastry and enjoy your contribution to the 20 million messages/day.  When you're done, take a picture of your tongue... red or blue?  Watch the video here:

    Read the article

  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
    Digital World, Digital Locks

    One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably one of the most important of these assets is passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetrical keys, private keys, etc., etc. Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

    Good-Ol’ Post-It Notes Under Your Keyboard

    If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: You should keep your passwords on an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations. Now I have to break down that statement for you:
    - Encrypted: what’s the point of saving your passwords in a file that any Windows admin with enough privileges can read?
    - Access controlled: This one is pretty much self-explanatory.
    - Version controlled: Passwords change (and I’m really hoping you do change them) and version control would allow you to track what a previous password was if the utility you’ve chosen doesn’t handle that for you.
    - Backed-up: You want a safe copy of the password list to be kept offline, preferably in long term storage, with relative ease of restoring.
    - Well-known shared location: This is critical for teams: what good is a password list if only one person in the team knows where it is?

    I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions like cell encryption for this, but I have found such implementations to be impractical, for the most part.

    Enter The World Of Utilities

    There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, Sharepoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before that are worth mentioning include PasswordSafe and 1Password, the latter being a paid solution - but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers I work with don't need that level of protection for their digital assets, and something like a KeePass database on Sharepoint suits them very well. What are you doing to safeguard your passwords? Leave a comment below, and join the discussion! Cheers, -Argenis

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the blog post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution. I personally think of retail games as relics of the past. It does not really make much sense to keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel. The world is not all rainbows, however. One big issue with mixing digital distribution and boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they're selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy, as the game can be tightly bound to the service you downloaded it from. Many players are, however, complaining about not being able to play their games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer, as we tend to expect that once we have the product we can use it anywhere, because we physically own it. The truth is that we don’t actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We’re not, for instance, allowed to disassemble the product, which the owner would indeed be permitted to do. Digital distribution allows us to provide games as services instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet, but you still have the same rights to use the product. It’s really straightforward; if you downloaded a client from the internet, you are expected to have an internet connection so you’re able to connect to the server. A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere, as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere. Another development we’re seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real-world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to the way they play, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by providing players valuable things to buy, making their gaming experience more fun. I am personally very excited about free-to-play games, as it’s possible to start building the game together with your players, and there is no need to work on a game for 5 years from start to finish and only then see if it’s actually something the players like. This is a typical problem with big movie-like retail games, and the recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • .com domain transfer failing

    - by digital
    Hi, I'm trying to transfer one of my .com addresses between registrars. I'm down as the owner contact (confirmed working) and the losing registrar is down as the tech and admin contact. Last week I received an email stating that the domain transfer had been rejected by the losing registrar. I contacted the losing registrar and they denied that. My money from the winning registrar was refunded and I was told to try again. I've initiated the transfer again and received confirmation of the pending transfer; I gave the correct EPP code and confirmed the transfer. Currently the status on the domain is set as OK - should it not be transfer pending? According to my name.com transfer page, if the transfer is not auth'd in 5 days it will auto-transfer anyway. I don't believe this will happen. Name.com have been really helpful, but they can't really do much more now. The losing registrar is not being helpful, hence me turning here. What can I do to make sure the domain transfers? The domain transfer is set to expire on the 17th. Any help would be greatly appreciated.
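
    As a generic check (using a placeholder domain name), whois shows the EPP status codes the registry currently holds for the domain; a transfer can only begin once no transfer lock is set:

      whois mydomain.com | grep -i status
      # "ok" means no operation is pending; "clientTransferProhibited"
      # means the losing registrar still has the transfer lock enabled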

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of Squid and DansGuardian. We use basic authentication to select which users are allowed outside of our network. It seems Squid is working just fine and accepts my username and password and lets me out. But if I connect through DansGuardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. It's pulling my username for the error message, so I know it knows who I am. The part I get confused on is that I thought that part was handled entirely by Squid, and Squid is working flawlessly. Can someone please double check my config files and tell me if I'm missing something, or if there is some new option I must set to get this to work?

    dansguardian.conf:

      # Web Access Denied Reporting (does not affect logging)
      #
      # -1 = log, but do not block - Stealth mode
      # 0 = just say 'Access Denied'
      # 1 = report why but not what denied phrase
      # 2 = report fully
      # 3 = use HTML template file (accessdeniedaddress ignored) - recommended
      #
      reportinglevel = 3

      # Language dir where languages are stored for internationalisation.
      # The HTML template within this dir is only used when reportinglevel
      # is set to 3. When used, DansGuardian will display the HTML file instead of
      # using the perl cgi script. This option is faster, cleaner
      # and easier to customise the access denied page.
      # The language file is used no matter what setting however.
      #
      languagedir = '/etc/dansguardian/languages'

      # language to use from languagedir.
      language = 'ukenglish'

      # Logging Settings
      #
      # 0 = none 1 = just denied 2 = all text based 3 = all requests
      loglevel = 3

      # Log Exception Hits
      # Log if an exception (user, ip, URL, phrase) is matched and so
      # the page gets let through. Can be useful for diagnosing
      # why a site gets through the filter. on | off
      logexceptionhits = on

      # Log File Format
      # 1 = DansGuardian format 2 = CSV-style format
      # 3 = Squid Log File Format 4 = Tab delimited
      logfileformat = 1

      # Log file location
      #
      # Defines the log directory and filename.
      #loglocation = '/var/log/dansguardian/access.log'

      # Network Settings
      #
      # the IP that DansGuardian listens on. If left blank DansGuardian will
      # listen on all IPs. That would include all NICs, loopback, modem, etc.
      # Normally you would have your firewall protecting this, but if you want
      # you can limit it to only 1 IP. Yes only one.
      filterip =

      # the port that DansGuardian listens to.
      filterport = 8080

      # the ip of the proxy (default is the loopback - i.e. this server)
      proxyip = 127.0.0.1

      # the port DansGuardian connects to proxy on
      proxyport = 3128

      # accessdeniedaddress is the address of your web server to which the cgi
      # dansguardian reporting script was copied
      # Do NOT change from the default if you are not using the cgi.
      #
      accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl'

      # Non standard delimiter (only used with accessdeniedaddress)
      # Default is enabled but to go back to the original standard mode dissable it.
      nonstandarddelimiter = on

      # Banned image replacement
      # Images that are banned due to domain/url/etc reasons including those
      # in the adverts blacklists can be replaced by an image. This will,
      # for example, hide images from advert sites and remove broken image
      # icons from banned domains.
      # 0 = off
      # 1 = on (default)
      usecustombannedimage = 1
      custombannedimagefile = '/etc/dansguardian/transparent1x1.gif'

      # Filter groups options
      # filtergroups sets the number of filter groups. A filter group is a set of content
      # filtering options you can apply to a group of users. The value must be 1 or more.
      # DansGuardian will automatically look for dansguardianfN.conf where N is the filter
      # group. To assign users to groups use the filtergroupslist option. All users default
      # to filter group 1. You must have some sort of authentication to be able to map users
      # to a group. The more filter groups the more copies of the lists will be in RAM so
      # use as few as possible.
      filtergroups = 1
      filtergroupslist = '/etc/dansguardian/filtergroupslist'

      # Authentication files location
      bannediplist = '/etc/dansguardian/bannediplist'
      exceptioniplist = '/etc/dansguardian/exceptioniplist'
      banneduserlist = '/etc/dansguardian/banneduserlist'
      exceptionuserlist = '/etc/dansguardian/exceptionuserlist'

      # Show weighted phrases found
      # If enabled then the phrases found that made up the total which excedes
      # the naughtyness limit will be logged and, if the reporting level is
      # high enough, reported. on | off
      showweightedfound = on

      # Weighted phrase mode
      # There are 3 possible modes of operation:
      # 0 = off = do not use the weighted phrase feature.
      # 1 = on, normal = normal weighted phrase operation.
      # 2 = on, singular = each weighted phrase found only counts once on a page.
      #
      weightedphrasemode = 2

      # Positive result caching for text URLs
      # Caches good pages so they don't need to be scanned again
      # 0 = off (recommended for ISPs with users with disimilar browsing)
      # 1000 = recommended for most users
      # 5000 = suggested max upper limit
      urlcachenumber =
      #
      # Age before they are stale and should be ignored in seconds
      # 0 = never
      # 900 = recommended = 15 mins
      urlcacheage =

      # Smart and Raw phrase content filtering options
      # Smart is where the multiple spaces and HTML are removed before phrase filtering
      # Raw is where the raw HTML including meta tags are phrase filtered
      # CPU usage can be effectively halved by using setting 0 or 1
      # 0 = raw only
      # 1 = smart only
      # 2 = both (default)
      phrasefiltermode = 2

      # Lower casing options
      # When a document is scanned the uppercase letters are converted to lower case
      # in order to compare them with the phrases. However this can break Big5 and
      # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented
      # characters are supported.
      # 0 = force lower case (default)
      # 1 = do not change case
      preservecase = 0

      # Hex decoding options
      # When a document is scanned it can optionally convert %XX to chars.
      # If you find documents are getting past the phrase filtering due to encoding
      # then enable. However this can break Big5 and other 16-bit texts.
      # 0 = disabled (default)
      # 1 = enabled
      hexdecodecontent = 0

      # Force Quick Search rather than DFA search algorithm
      # The current DFA implementation is not totally 16-bit character compatible
      # but is used by default as it handles large phrase lists much faster.
      # If you wish to use a large number of 16-bit character phrases then
      # enable this option.
      # 0 = off (default)
      # 1 = on (Big5 compatible)
      forcequicksearch = 0

      # Reverse lookups for banned site and URLs.
      # If set to on, DansGuardian will look up the forward DNS for an IP URL
      # address and search for both in the banned site and URL lists. This would
      # prevent a user from simply entering the IP for a banned address.
      # It will reduce searching speed somewhat so unless you have a local caching
      # DNS server, leave it off and use the Blanket IP Block option in the
      # bannedsitelist file instead.
      reverseaddresslookups = off

      # Reverse lookups for banned and exception IP lists.
      # If set to on, DansGuardian will look up the forward DNS for the IP
      # of the connecting computer. This means you can put in hostnames in
      # the exceptioniplist and bannediplist.
      # It will reduce searching speed somewhat so unless you have a local DNS server,
      # leave it off.
      reverseclientiplookups = off

      # Build bannedsitelist and bannedurllist cache files.
      # This will compare the date stamp of the list file with the date stamp of
      # the cache file and will recreate as needed.
      # If a bsl or bul .processed file exists, then that will be used instead.
      # It will increase process start speed by 300%. On slow computers this will
      # be significant. Fast computers do not need this option. on | off
      createlistcachefiles = on

      # POST protection (web upload and forms)
      # does not block forms without any file upload, i.e. this is just for
      # blocking or limiting uploads
      # measured in kibibytes after MIME encoding and header bumph
      # use 0 for a complete block
      # use higher (e.g. 512 = 512Kbytes) for limiting
      # use -1 for no blocking
      #maxuploadsize = 512
      #maxuploadsize = 0
      maxuploadsize = -1

      # Max content filter page size
      # Sometimes web servers label binary files as text which can be very
      # large which causes a huge drain on memory and cpu resources.
      # To counter this, you can limit the size of the document to be
      # filtered and get it to just pass it straight through.
      # This setting also applies to content regular expression modification.
      # The size is in Kibibytes - eg 2048 = 2Mb
      # use 0 for no limit
      maxcontentfiltersize =

      # Username identification methods (used in logging)
      # You can have as many methods as you want and not just one. The first one
      # will be used then if no username is found, the next will be used.
      # * proxyauth is for when basic proxy authentication is used (no good for
      #   transparent proxying).
      # * ntlm is for when the proxy supports the MS NTLM authentication
      #   protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED**
      # * ident is for when the others don't work. It will contact the computer
      #   that the connection came from and try to connect to an identd server
      #   and query it for the user owner of the connection.
      usernameidmethodproxyauth = on
      usernameidmethodntlm = off # **NOT IMPLEMENTED**
      usernameidmethodident = off

      # Preemptive banning - this means that if you have proxy auth enabled and a user accesses
      # a site banned by URL for example they will be denied straight away without a request
      # for their user and pass. This has the effect of requiring the user to visit a clean
      # site first before it knows who they are and thus maybe an admin user.
      # This is how DansGuardian has always worked but in some situations it is less than
      # ideal. So you can optionally disable it. Default is on.
      # As a side effect disabling this makes AD image replacement work better as the mime
      # type is know.
      preemptivebanning = on

      # Misc settings
      # if on it adds an X-Forwarded-For: <clientip> to the HTTP request
      # header. This may help solve some problem sites that need to know the
      # source ip. on | off
      forwardedfor = on

      # if on it uses the X-Forwarded-For: <clientip> to determine the client
      # IP. This is for when you have squid between the clients and DansGuardian.
      # Warning - headers are easily spoofed. on | off
      usexforwardedfor = off

      # if on it logs some debug info regarding fork()ing and accept()ing which
      # can usually be ignored. These are logged by syslog. It is safe to leave
      # it on or off
      logconnectionhandlingerrors = on

      # Fork pool options
      # sets the maximum number of processes to sporn to handle the incomming
      # connections. Max value usually 250 depending on OS.
      # On large sites you might want to try 180.
      maxchildren = 180

      # sets the minimum number of processes to sporn to handle the incomming connections.
      # On large sites you might want to try 32.
      minchildren = 32

      # sets the minimum number of processes to be kept ready to handle connections.
      # On large sites you might want to try 8.
      minsparechildren = 8

      # sets the minimum number of processes to sporn when it runs out
      # On large sites you might want to try 10.
      preforkchildren = 10

      # sets the maximum number of processes to have doing nothing.
      # When this many are spare it will cull some of them.
      # On large sites you might want to try 64.
      maxsparechildren = 64

      # sets the maximum age of a child process before it croaks it.
      # This is the number of connections they handle before exiting.
      # On large sites you might want to try 10000.
      maxagechildren = 5000

      # Process options
      # (Change these only if you really know what you are doing).
      # These options allow you to run multiple instances of DansGuardian on a single machine.
      # Remember to edit the log file path above also if that is your intention.

      # IPC filename
      #
      # Defines IPC server directory and filename used to communicate with the log process.
      ipcfilename = '/tmp/.dguardianipc'

      # URL list IPC filename
      #
      # Defines URL list IPC server directory and filename used to communicate with the URL
      # cache process.
      urlipcfilename = '/tmp/.dguardianurlipc'

      # PID filename
      #
      # Defines process id directory and filename.
      #pidfilename = '/var/run/dansguardian.pid'

      # Disable daemoning
      # If enabled the process will not fork into the background.
      # It is not usually advantageous to do this.
      # on|off ( defaults to off )
      nodaemon = off

      # Disable logging process
      # on|off ( defaults to off )
      nologger = off

      # Daemon runas user and group
      # This is the user that DansGuardian runs as. Normally the user/group nobody.
      # Uncomment to use. Defaults to the user set at compile time.
      # daemonuser = 'nobody'
      # daemongroup = 'nobody'

      # Soft restart
      # When on this disables the forced killing off all processes in the process group.
      # This is not to be confused with the -g run time option - they are not related.
      # on|off ( defaults to off )
      softrestart = off

      maxcontentramcachescansize = 2000
      maxcontentfilecachescansize = 20000
      downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf'
      authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf'

    Squid.conf:

      http_port 3128
      hierarchy_stoplist cgi-bin ?
      acl QUERY urlpath_regex cgi-bin \?
      cache deny QUERY
      acl apache rep_header Server ^Apache
      #broken_vary_encoding allow apache
      access_log /squid/var/logs/access.log squid
      hosts_file /etc/hosts
      auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth
      auth_param basic children 5
      auth_param basic realm proxy
      auth_param basic credentialsttl 2 hours
      auth_param basic casesensitive off
      refresh_pattern ^ftp: 1440 20% 10080
      refresh_pattern ^gopher: 1440 0% 1440
      refresh_pattern . 0 20% 4320
      acl NoAuthNec src <HIDDEN FOR SECURITY>
      acl BrkRm src <HIDDEN FOR SECURITY>
      acl Dials src <HIDDEN FOR SECURITY>
      acl Comps src <HIDDEN FOR SECURITY>
      acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu.
      acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com
      acl password proxy_auth REQUIRED
      acl all src all
      acl manager proto cache_object
      acl localhost src 127.0.0.1/255.255.255.255
      acl to_localhost dst 127.0.0.0/8
      acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000
      acl Safe_ports port 80          # http
      acl Safe_ports port 21          # ftp
      acl Safe_ports port 443 563     # https, snews
      acl Safe_ports port 70          # gopher
      acl Safe_ports port 210         # wais
      acl Safe_ports port 1936-65535  # unregistered ports
      acl Safe_ports port 280         # http-mgmt
      acl Safe_ports port 488         # gss-http
      acl Safe_ports port 10000
      acl Safe_ports port 631
      acl Safe_ports port 901         # SWAT
      acl purge method PURGE
      acl CONNECT method CONNECT
      acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list"
      acl RestrictUTube dstdom_regex -i youtube.com
      acl RestrictFacebook dstdom_regex -i facebook.com
      acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list"
      acl BuemerKEC src 10.10.128.0/24
      acl MBSsortnet src 10.10.128.0/26
      acl MSNExplorer browser -i MSN
      acl Printers src <HIDDEN FOR SECURITY>
      acl SpecialFolks src <HIDDEN FOR SECURITY>
      # streaming download
      acl fails rep_mime_type ^.*mms.*
      acl fails rep_mime_type ^.*ms-hdr.*
      acl fails rep_mime_type ^.*x-fcs.*
      acl fails rep_mime_type ^.*x-ms-asf.*
      acl fails2 urlpath_regex dvrplayer mediastream mms://
      acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$
      acl deny_rep_mime_flashvideo rep_mime_type -i video/flv
      acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$
      acl x-type req_mime_type -i ^application/octet-stream$
      acl x-type req_mime_type -i application/octet-stream
      acl x-type req_mime_type -i ^application/x-mplayer2$
      acl x-type req_mime_type -i application/x-mplayer2
      acl x-type req_mime_type -i ^application/x-oleobject$
      acl x-type req_mime_type -i application/x-oleobject
      acl x-type req_mime_type -i application/x-pncmd
      acl x-type req_mime_type -i ^video/x-ms-asf$
      acl x-type2 rep_mime_type -i ^application/octet-stream$
      acl x-type2 rep_mime_type -i application/octet-stream
      acl x-type2 rep_mime_type -i ^application/x-mplayer2$
      acl x-type2 rep_mime_type -i application/x-mplayer2
      acl x-type2 rep_mime_type -i ^application/x-oleobject$
      acl x-type2 rep_mime_type -i application/x-oleobject
      acl x-type2 rep_mime_type -i application/x-pncmd
      acl x-type2 rep_mime_type -i ^video/x-ms-asf$
      acl RestrictHulu dstdom_regex -i hulu.com
      acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com
      acl RestrictVimeo dstdom_regex -i vimeo.com
      acl http_port port 80
      #http_reply_access deny deny_rep_mime_flashvideo
      #http_reply_access deny deny_rep_mime_shockwave
      #streaming files
      #http_access deny fails
      #http_reply_access deny fails
      #http_access deny fails2
      #http_reply_access deny fails2
      #http_access deny x-type
      #http_reply_access deny x-type
      #http_access deny x-type2
      #http_reply_access deny x-type2
      follow_x_forwarded_for allow localhost
      acl_uses_indirect_client on
      log_uses_indirect_client on
      http_access allow manager localhost
      http_access deny manager
      http_access allow purge localhost
      http_access deny purge
      http_access allow SpecialFolks
      http_access deny CONNECT !SSL_ports
      http_access allow whsws
      http_access allow opensites
      http_access deny BuemerKEC !MBSsortnet
      http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo
      http_access allow RestrictUTube UTubeUsers
      http_access deny RestrictUTube
      http_access allow RestrictFacebook FacebookUsers
      http_access deny RestrictFacebook
      http_access deny RestrictHulu
      http_access allow NoAuthNec
      http_access allow BrkRm
      http_access allow FacebookUsers RestrictVimeo
      http_access deny RestrictVimeo
      http_access allow Comps
      http_access allow Dials
      http_access allow Printers
      http_access allow password
      http_access deny !Safe_ports
      http_access deny SSL_ports !CONNECT
      http_access allow http_port
      http_access deny all
      http_reply_access allow all
      icp_access allow all
      access_log /squid/var/logs/access.log squid
      visible_hostname proxy.site.com
      forwarded_for off
      coredump_dir /squid/cache/
      #header_access Accept-Encoding deny broken
      #acl snmppublic snmp_community mysecretcommunity
      #snmp_port 3401
      #snmp_access allow snmppublic all
      cache_mem 3 GB
      #acl snmppublic snmp_community mbssquid
      #snmp_port 3401
      #snmp_access allow snmppublic all
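
    Given the authplugin = proxy-basic line above, one thing worth checking (a guess, not a confirmed fix): DansGuardian applies its own per-user controls after Squid authenticates, so a username can pass Squid yet still be denied by DansGuardian's own lists. The two files referenced by the config control this; the contents below are illustrative:

      # /etc/dansguardian/banneduserlist - any username listed here is denied outright
      # (make sure the affected user is absent)

      # /etc/dansguardian/filtergroupslist - optional username=group mappings, e.g.:
      someuser=filter1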

    Read the article

  • Godaddy domain and Bluehost web hosting

    - by Digital site
    I have a domain from GoDaddy and web hosting at Bluehost. I want to make this work without transferring the domain, as some people say there is no need to transfer it from GoDaddy to Bluehost. I was trying to work this out by adding the Bluehost name servers ns1.bluehost.com and ns2.bluehost.com at GoDaddy. This works fine, but I'm not sure if it's 100% OK yet. The reason why I say that is when I type my address into any browser this way: mydomain.com, it doesn't work. Instead I get an error message stating that the server is not found or couldn't be connected to... However, when I write the domain name and include the www. prefix, it works fine... The other problem is when I search in Google or Yahoo, the domain shows up like this: mydomain.com, which is not really good because my clients think my site is down because of the error message, and most new people don't know that they have to add www. to the domain to make it work. I just want to make at least the bare domain work like this: mydomain.com
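
    One quick way to see whether the bare domain resolves at all (placeholder name; run from any machine with dig installed):

      dig +short mydomain.com
      dig +short www.mydomain.com
      # if the first prints nothing while the second prints an IP, the zone
      # has an A record for www but none for the bare (apex) domain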

    Read the article

  • Context menu event handling error - CS1061

    - by MrTemp
    I am still new to C# and WPF. This program is a clock with different views, and I would like to use the context menu to change between views, but the error says that there is no definition or extension method for the events. Right now I have the event I'm working on pop up a MessageBox just so I know it has run, but I cannot get it to compile.

      public partial class MainWindow : NavigationWindow
      {
          public MainWindow()
          {
              //InitializeComponent();
          }

          public void AnalogMenu_Click(object sender, RoutedEventArgs e)
          {
              /*AnalogClock analog = new AnalogClock();
              this.NavigationService.Navigate(analog);*/
          }

          public void DigitalMenu_Click(object sender, RoutedEventArgs e)
          {
              MessageBox.Show("Digital Clicked");
              /*DigitalClock digital = new DigitalClock();
              this.NavigationService.Navigate(digital);*/
          }

          public void BinaryMenu_Click(object sender, RoutedEventArgs e)
          {
              /*BinaryClock binary = new BinaryClock();
              this.NavigationService.Navigate(binary);*/
          }
      }

    and the XAML call if you want it:

      <NavigationWindow.ContextMenu>
          <ContextMenu Name="ClockMenu" >
              <MenuItem Name="ToAnalog" Header="To Analog" ToolTip="Changes to an analog clock"/>
              <MenuItem Name="ToDigital" Header="To Digital" ToolTip="Changes to a digital clock" Click="DigitalMenu_Click" />
              <MenuItem Name="ToBinary" Header="To Binary" ToolTip="Changes to a binary clock"/>
          </ContextMenu>
      </NavigationWindow.ContextMenu>

    Read the article

  • Clients didn't switch to secondary DNS server during fail over

    - by The Digital Ninja
    I have two internal DNS servers set up, and all my servers have both of them in their resolv.conf. Our main DNS server went down, and suddenly the servers could no longer see each other. I edited a few of the servers' resolv.conf files manually and commented out the first (down) DNS server, and those machines would instantly be able to ping again. What did I do wrong - does it not automatically switch to the secondary DNS server when the first times out?

      # File managed by puppet
      nameserver 192.168.146.100
      nameserver 192.168.159.101
      ;nameserver 72.14.188.5
      domain example.com
      search example.com
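
    For reference, the stock glibc resolver does fall back to the second nameserver, but only per query and only after the full timeout against the first one (5 seconds by default, retried on each attempt), which in practice can look like a total outage. The timeouts can be tightened with resolver options; a sketch with illustrative values:

      options timeout:1 attempts:2 rotate
      nameserver 192.168.146.100
      nameserver 192.168.159.101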

    Read the article

  • Segfault with rtorrent on Debian Lenny

    - by digital
    Hi, my Debian Lenny server keeps segfaulting with rtorrent; it happens once every 24 hours. Libcurl has been recompiled to the latest version and it still seems to happen. I'm not the best when it comes to Linux server admin, but if you require more info about the system I'll try and get it for you. lib/rtorrent are 0.8.5/0.12.5. Any help would be appreciated, as I'd like rtorrent up 24/7.

      Caught Segmentation fault, dumping stack:
      0 rtorrent [0x439686]
      1 rtorrent [0x43e06a]
      2 /lib/libc.so.6 [0x7f73ce780f60]
      3 /usr/lib/libcurl.so.4 [0x7f73d04f4431]
      4 /usr/lib/libcurl.so.4 [0x7f73d04f47da]
      5 /usr/lib/libcurl.so.4(curl_multi_remove_handle+0x341) [0x7f73d050acb1]
      6 rtorrent [0x480221]
      7 rtorrent [0x482915]
      8 /usr/local/lib/libtorrent.so.11 [0x7f73d02b1f95]
      9 /usr/local/lib/libtorrent.so.11 [0x7f73d02b1fea]
      10 /usr/local/lib/libtorrent.so.11 [0x7f73d02b4cfc]
      11 rtorrent [0x48058a]
      12 rtorrent [0x439f49]
      13 /lib/libc.so.6(__libc_start_main+0xe6) [0x7f73ce76d1a6]
      14 rtorrent(_ZNSt8ios_base4InitD1Ev+0x71) [0x40ea99]
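
    To turn raw addresses like these into a readable backtrace, one generic approach (assuming gdb and an rtorrent binary built with debug symbols are available) is to run rtorrent under gdb inside screen/tmux and wait for the crash:

      gdb --args rtorrent
      (gdb) run
      ...reproduce the segfault...
      (gdb) bt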

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 in runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/\:0-greeter.log:

      Gtk-Message: Failed to load module "pk-gtk-module"
      /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
      /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined? Edit: here is a better description of the error:

      (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
      /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.

      May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
      May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
      May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
      May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
      May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
      May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
      May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
      May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
      May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
      May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
      May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
      May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
      May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
      May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
      May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
      May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
      May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?
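
    Since the error points at a stale libatk-bridge module, a few generic rpm checks can show whether the accessibility stack survived the upgrade (package names are from memory and may differ slightly on F15):

      # which package owns the module that fails to load?
      rpm -qf /usr/lib/gtk-3.0/modules/libatk-bridge.so

      # verify atk and the at-spi bridge packages are intact and at F15 versions
      rpm -V atk
      rpm -q atk at-spi2-atk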

    Read the article

  • why sendmail resolves to ISP domain?

    - by digital illusion
    I wish to set up a local mail server for debugging purposes using Fedora 15. I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost), and in /var/log/mail I see that I correctly delivered a mail to [email protected]:

      Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost
      Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1]
      Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery)

    When I connect, NetworkManager fills in /etc/resolv.conf with:

      domain fastwebnet.it
      search fastwebnet.it localdomain
      nameserver 62.101.93.101
      nameserver 83.103.25.250

    Now sendmail no longer works and tries to send messages to my ISP's domain, as seen in the log:

      Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery)
      Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown

    As you can see, it tries to deliver a mail to [email protected] and fails. The setup works under other ISPs. How can I avoid the Fastweb ISP DNS relay? Thank you
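
    One generic way to keep the machine's own hostname from ever being resolved through the ISP's search domain is to pin it in /etc/hosts (the 192.168.1.10 address below is a placeholder for the box's real LAN address), since glibc consults files before DNS when /etc/nsswitch.conf contains "hosts: files dns":

      # /etc/hosts - resolve the local hostname without touching DNS
      127.0.0.1      localhost localhost.localdomain
      192.168.1.10   PowersourceII.localdomain PowersourceII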

    Read the article

  • force all urls to www and force domain to non-www

    - by Digital site
    I was trying to force my domain to redirect without www, and succeeded with this code in .htaccess:

      RewriteCond %{HTTP_HOST} ^www\.domain\.com [NC]
      RewriteRule ^(.*) http://domain.com/$1 [R=301,L]

    However, this code redirects all www URLs to non-www, which is not what I want. I just want the main domain to go from www.mydomain.com to mydomain.com, while the rest of the URLs are forced to www. Any idea how to add to or modify the code so I can achieve that through .htaccess? Update: Thanks to all. I found out that the swf file from Piecemaker was corrupted and replaced it with a new one, so now it all works fine on both www and non-www. I'm still curious how to solve this issue with .htaccess anyway. Thanks again.
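
    For the record, one way to get that split with mod_rewrite - a sketch using the placeholder domain.com from the snippet above, untested - is to send only the bare root on the www host to the naked domain and force everything else to www:

      RewriteEngine On

      # root request on the www host -> non-www
      RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
      RewriteRule ^$ http://domain.com/ [R=301,L]

      # any non-empty path on the bare host -> www
      RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
      RewriteRule ^(.+)$ http://www.domain.com/$1 [R=301,L]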

    Read the article

  • winbind failing after a semi-random amount of time

    - by The Digital Ninja
    I have winbind set up to authenticate to our AD for Samba shares. This is the third such server, and the only one having any issues. It seems that after a random amount of time the Samba shares will just stop working. The winbind processes seem to be running, but restarting them fixes the issue for a while. Looking at the logs has been kind of hit and miss, and I don't know exactly when it fails. One interesting thing is that it seems to be pulling from another domain controller that it shouldn't. I censored out the domain name in this example. But isn't there some way to block authentication to a domain? I'm not sure if this is a symptom or a cause of the issue.

      [2010/10/18 08:02:10, 0] winbindd/winbindd_cache.c:initialize_winbindd_cache(2577)
        initialize_winbindd_cache: clearing cache and re-creating with version number 1
      [2010/10/18 09:15:54, 1] libsmb/clikrb5.c:ads_krb5_mk_req(686)
        ads_krb5_mk_req: krb5_get_credentials failed for [email protected] (Cannot find KDC for requested realm)
      [2010/10/18 09:15:54, 1] libsmb/cliconnect.c:cli_session_setup_kerberos(624)
        cli_session_setup_kerberos: spnego_gen_negTokenTarg failed: Cannot find KDC for requested realm
      [2010/10/18 09:15:54, 0] lib/util_sock.c:write_data(1139)
        write_data: write failure. Error = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:write_socket(242)
        write_socket: Error writing 108 bytes to socket 18: ERRNO = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:cli_send_smb(290)
        Error writing 108 bytes to client. -1 (Connection reset by peer)
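
    As a general note, "Cannot find KDC for requested realm" usually means the realm has no KDC mapping in /etc/krb5.conf and DNS SRV discovery is failing; pinning the domain controllers explicitly is one common workaround (realm and host names below are placeholders):

      [realms]
          EXAMPLE.COM = {
              kdc = dc1.example.com
              kdc = dc2.example.com
          }

      [domain_realm]
          .example.com = EXAMPLE.COM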

    Read the article

  • proxy_ajp wildcards

    - by The Digital Ninja
    I need to set up Apache so that any site.com/ANYTHING/servlet/ANYTHING goes over AJP into Tomcat, but regular files still go through Apache. I have been messing around with this to no avail:

      <LocationMatch "./*/servlet/*">
          Order Allow,Deny
          Allow from all
          ProxyPass ajp://localhost:8009/
          ProxyPassReverse /
      </LocationMatch>

    This works at directing everything to our Tomcat instance:

      ProxyPass / ajp://localhost:8009/
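
    If the goal is regex-based forwarding, ProxyPassMatch (available in Apache 2.2 and later) may be a better fit than a ProxyPass inside LocationMatch, since it passes the matched path through to the backend; a sketch assuming the URL shape above:

      # send any /ANYTHING/servlet/ANYTHING path to Tomcat over AJP;
      # all other URLs are still served by Apache
      ProxyPassMatch ^(/.+/servlet/.*)$ ajp://localhost:8009$1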

    Read the article

  • How to keep track of time.

    - by The Digital Ninja
    This is just a general question. I started working from home a few months ago, and I find the hardest part is keeping track of what I'm working on and how much time I've spent on it. I do both programming and network admin work. Are there any free software packages out there that some of you use?

    Read the article
