Search Results

Search found 17913 results on 717 pages for 'old school rules'.


  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application for end-users: the desktop application. As the name implies, these applications ran strictly on a desktop computer and were limited by the resources available to that computer. Initially, applications of this type did not need resources outside the scope of the computer on which they were installed. The problem with this type of application is that if multiple end-users need to access the same desktop application, then the application must be installed on each end-user's computer. In that age of software development, security was not as big a concern as it is today with other types of applications, primarily because an end-user must have access to the computer where the software is installed in order to access the application. In addition, developers could password-protect the application in case an unauthorized end-user gained access to the computer.
    With the birth of the internet, a second form of application emerged as developers tried to solve the inherent issues of the preexisting desktop application. One solution to overcome some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser in order to use the application. Because a web application is installed on a remote server, it removes the need for individual installations of the same application on each end-user's computer. The main benefits of hosting an application on a server are increased accessibility, since nothing has to be installed on a desktop computer for an end-user to reach the application, and easier maintenance, because any change to the application is applied on the server and is inherently applied to every end-user. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility come additional costs compared to a desktop application, because of the cost and maintenance of the server hosting the application. Typically, after a desktop application is purchased, there are no additional recurring fees associated with it.
    When developing a web-based application, there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also introduces a new failure point: an end-user must now have network connectivity in order to access the application. This is not a concern for desktop applications, because their resources are typically bound to the computer on which they run. Since the availability of an application is increased by the client-server model of a web-based application, additional security concerns also come into play. As stated before, a desktop application is bound by the end-user's physical access to the computer on which the application is installed. This is not the case with web-based applications, which can potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data.
    Examples of these steps include, but are not limited to, the following:
    Restricted/Password Areas: This form of security is used when specific information can only be accessed by end-users based on a set of accessibility rules.
    IP Restrictions: This form of security is used when only specific locations need to access an application. It is applied from within the web server or a firewall.
    Network Restrictions (Firewalls): This form of security is used to contain access to an application within a specific subset of a network.
    Data Encryption: This form of security is used to transform personally identifiable information into something unreadable so that it can be stored safely for future use.
    Encrypted Protocols (HTTPS): This form of security is used to prevent others from reading messages being sent between applications over a network.
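    To make a couple of these measures concrete, here is a minimal command-line sketch of what they can look like on a typical Linux web host. The file names, paths, and the user name are invented for the example, and the exact commands depend on the web server and distribution in use:

        # Restricted/Password area: create an HTTP Basic Auth user database for Apache
        # (requires the apache2-utils package; path and user name are only examples)
        htpasswd -c /etc/apache2/.htpasswd demo_user

        # Data encryption: encrypt a file of personally identifiable information
        # with a symmetric key before storing it (prompts for a passphrase)
        openssl enc -aes-256-cbc -salt -in customers.csv -out customers.csv.enc

        # Encrypted protocol (HTTPS): generate a self-signed certificate for testing
        openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes

    In practice the certificate would come from a certificate authority rather than being self-signed, but the sketch shows where each of the listed measures lives: in the web server configuration, in the data storage layer, and in the transport layer.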

    Read the article

  • Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?

    - by Asian Angel
    When it comes to computers, Windows definitely rules the desktop in comparison to other operating systems. What we would like to know this week is if you would actually be willing to give up using Windows altogether and move to a different operating system on your computers. Note: This week’s Ask the Readers post is posing a hypothetical situation, so please refrain from starting arguments or a flame war in the comments. Good reasoned discussion is always welcome.
    There is no doubt that Windows is the dominant operating system in use today. Everywhere you go or look, it is easy to find computers with Windows installed, such as at work, home, the library, government offices, and more. For many people it is the operating system that they know and are comfortable with, which makes changing to a different operating system less appealing. Adding to the preference for Windows (or dependency, based on your view) is the custom software that many businesses use on a daily basis. Throw in the high volume of people who depend on and use Microsoft Office as a standard for their business documents, and it is little wonder that Windows is so dominant.
    So what would you use if you did decide to take a break from or permanently move away from Windows? If your choice is Linux, then you have a large and wonderful variety of distributions to choose from based on what you want out of your system. Want a distribution that is easy to work with? You could choose Ubuntu, Linux Mint, or others that are engineered to be ready to go “out of the box”. Like a challenge? Perhaps Arch Linux is more your style. One of the most attractive features of all about Linux is the price…it is very hard to beat free!
    Maybe Mac OS X sounds like the perfect choice. It has a certain mystique and elegance associated with it, and many OS X fans refuse to use anything else if given a choice. Then there is the soon to be released Chrome OS with its emphasis on cloud computing. This is a system that is definitely focused on being as low-maintenance and hassle-free as possible. Quick on, quick off, minimalist, and made to be portable. All of the system’s updates will occur automatically, leaving you free to work and play in the cloud. But it does have its limitations…no installing all of those custom apps that you love using on Windows or other systems…it is literally all about the browsing window and web apps.
    So there you have it. If the opportunity presented itself would you, could you give Windows up and use a different operating system? Would it be easy or hard for you to do? Perhaps it would not really matter so long as you could do what you needed or wanted to do on a computer. And maybe this is the perfect time to try something new and find out…that new favorite operating system could be just an install disc away. Let us know your thoughts in the comments!

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.
    Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:
    1. Is the rate of indexing directly correlated to PR? By that I mean, is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially. Besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed.
    Our main competitor has achieved approximately 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking. Noteworthy qualities we have in place:
    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest in Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.
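    For whenever the poster does decide to submit sitemaps: the sitemap protocol caps each file at 50,000 URLs, so a site this size needs many sitemap files plus a sitemap index. Below is a minimal sketch; the urls.txt input file and the example.com domain are hypothetical, and XML escaping of unusual URLs is omitted for brevity:

        # Split a flat list of URLs (one per line) into sitemaps of at most 50,000 URLs each
        split -l 50000 urls.txt chunk_
        i=0
        for f in chunk_*; do
          i=$((i+1))
          {
            echo '<?xml version="1.0" encoding="UTF-8"?>'
            echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            sed 's|.*|  <url><loc>&</loc></url>|' "$f"
            echo '</urlset>'
          } > "sitemap$i.xml"
        done
        # Build the sitemap index that points at the generated files
        {
          echo '<?xml version="1.0" encoding="UTF-8"?>'
          echo '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
          for n in $(seq 1 $i); do
            echo "  <sitemap><loc>http://www.example.com/sitemap$n.xml</loc></sitemap>"
          done
          echo '</sitemapindex>'
        } > sitemap_index.xml

    The generated sitemaps could then be referenced from robots.txt or submitted through Webmaster Tools in batches, which would fit the staged on-boarding idea mentioned above.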

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    Webcast: Oracle Maximum Availability Architecture Best Practices (event.on24.com) - Date: Thursday, April 12, 2012. Time: 10:00 AM PDT. Oracle expert Tom Kyte discusses how Oracle’s Maximum Availability Architecture can help to minimize the costs and risk of downtime.
    Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat (www.oracle.com) - Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle), John Fowler (Exec VP Systems, Oracle), Brad Cameron (VP Development, Oracle Fusion Middleware), Bill Nesheim (VP Oracle Solaris), Dennis Reno (VP Customer Portal Experience, Oracle), Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center), Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center).
    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org) - Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.
    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com) - Wednesday, May 02, 2012, 8:00 AM – 4:30 PM, Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.
    Webcast Series: Data Warehousing Best Practices (event.on24.com) - April 19, 2012: Best Practices for Workload Management of a Data Warehouse on Oracle Exadata. May 10, 2012: Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.
    How to create a Global Rule that stores a document’s folder path in a custom metadata field | Nicolas Montoya (blogs.oracle.com) - An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya.
    Get Proactive with Fusion Middleware | Daniel Mortimer (blogs.oracle.com) - Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware."
    Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick (www.zdnet.com) - Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse.
    Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson (fusionsecurity.blogspot.com) - Chris Johnson shows how to create a shopping cart login model using "plain old HTML."
    How to use the Human WorkFlow Web Services | Edwin Biemond (biemond.blogspot.com) - Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions.
    Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis (andrejusb.blogspot.com) - "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis.
    Thought for the Day: "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." — Larry Constantine

    Read the article

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I just recently started using Ubuntu (12.04) a few weeks ago and noticed that the interface is very slow and sluggish:
    - On Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays search results.
    - Opening new files or applications also takes quite long and feels awkward.
    - Dragging icons or moving app windows around is not very responsive either: I have to take extra care in moving the mouse, otherwise Ubuntu does not make the correct movement or might end up doing something incorrect instead, e.g. opening the window to full screen or moving the file to a different folder, which is frustrating.
    My PC is a few years old already (1.7 GB RAM), so this could be a reason too, but when I checked in System Monitor it's hardly ever consuming much memory. Plus, web-surfing in Firefox is actually lightning fast (much faster than on Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver. So I tried it and restarted, as instructed.
    Now, I got into a big problem upon restart... the login-screen window (where I have to type in the password) would take several attempts to display and finally did not manage to (it would freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running on the bottom 1/3 of the screen; after a long while the background screen would come up again. Eventually I managed to access the desktop, but the launcher, top menu bar, and app window borders would not appear.
    I searched around and found many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to use "killall -u $USER" on the command line (it's the only thing among the various online suggestions I could do, as at that point I could not access the Terminal without the launcher - Ctrl-Alt-T doesn't work for me). So I did that and, by creating a new account, was able to access the desktop correctly again with the launcher/menu. But I would still have the same problem when logging into my original account. So I finally tried upgrading to 12.10 and now can access my original account with a fully functional desktop - the launcher, menu, and window borders are all back now.
    However, the problem with sluggishness still remains. And now I am scared of ever having to update the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver caused this problem and whether there is a way I can update it safely in the future? I'm still not sure how to solve the problem with the sluggishness either, and not sure where else to look for a solution.

    Read the article

  • Configure Calendar Server 7 to Use the davUniqueId Attribute

    - by dabrain
    Starting with Calendar Server 7 Update 3 (Patch 08) we introduce a new attribute, davUniqueId, in the davEntity objectclass, to use as the unique identifier. The reason behind this is quite simple: the LDAP operational attribute nsUniqueId had been chosen as the default value used for the unique identifier, and it was discovered that this choice has a potentially serious downside. The problem with using nsUniqueId is that if the LDAP entry for a user, group, or resource is deleted and recreated in LDAP, the new entry receives a different nsUniqueId value from the Directory Server, causing a disconnect from the existing account in the calendar database. As a result, recreated users cannot access their existing calendars.
    How do you configure Calendar Server to use the davUniqueId attribute?
    First, populate davUniqueId on the LDAP users. You can create an LDIF output file only, or (with the -x option) run ldapmodify directly from the populate-davuniqueid shell script:
    # ./populate-davuniqueid -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" -O -o /tmp/out.ldif
    The ldapmodify might fail as shown below. In that case the LDAP entry already has the 'daventity' objectclass; for those entries, run the populate-davuniqueid script without the -O option.
    # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
    modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
    ldapmodify: Type or value exists (20)
    In this case the user 'mparis' already has the objectclass 'daventity', so ldapmodify skips this DN and just takes the next one (if you start ldapmodify with the -c option; otherwise it stops completely):
    dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
    changetype: modify
    add: objectclass
    objectclass: daventity
    -
    add: davuniqueid
    davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d
    If you run populate-davuniqueid without the -O option, or change the output file to
    dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
    changetype: modify
    add: davuniqueid
    davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d
    then the ldapmodify works fine:
    # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
    modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
    The only issue I see here is that you need to verify which users might need the 'daventity' objectclass as well. On the other hand, you can start without the objectclass and only add it for the users where you get an 'Objectclass violation' report; that indicates the objectclass is missing.
    Now it is time to change the configuration to use the davuniqueid attribute:
    # ./davadmin config modify -o davcore.uriinfo.permanentuniqueid -v davuniqueid
    It is also necessary to modify the search filter to use davuniqueid instead of nsuniqueid:
    # ./davadmin config modify -o davcore.uriinfo.subjectattributes -v "cn davstore icsstatus mail mailalternateaddress davUniqueId owner preferredlanguageuid objectclass ismemberof uniquemember memberurl mgrprfc822mailmember"
    Afterward, IWC Calendar works fine and my test user is able to access all his old events.

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV).  ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM for instance) interact with unstructured data (documents, design, 3D models, etc). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hear about real examples of usage of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • PPTP VPN connects via NM but goes down during SSH connection

    - by Andrea Olivato
    I set up a PPTP VPN connection via Network Manager and it connects correctly (I see the lock near the notification icon and the message "VPN connection has been successfully..."). As soon as I try to perform any SSH connection via the established tunnel, the connection itself goes down with the message "VPN connection failed". The SSH connection always fails at:
    debug1: SSH2_MSG_KEXINIT sent
    I've looked into the system logs and this is the log:
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> Starting VPN service 'pptp'...
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 7093
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' appeared; activating connections
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: init (1)
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: starting (3)
    Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (Connect) reply received.
    Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP4 Config Get) reply received from old-style plugin.
    Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN Gateway: 5.98.141.210
    Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP Config Get) complete.
    Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: started (4)
    Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopping (5)
    Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopped (6)
    Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state change reason: 0
    Dec 12 12:25:15 ushuaia NetworkManager[1155]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
    Dec 12 12:25:20 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' disappeared
    Please note that the same VPN is configured on my colleagues' Windows 7 machines and works without problems when they use PuTTY to connect via SSH.

    Read the article

  • Repeated disconnects on WPA PEAP network

    - by exasperated
    My school has a WPA PEAP network with GTC inner authentication. I am able to connect to the network, but once I load a website or two, the network become unresponsive (i.e. in Chromium, it gets stuck at "Sending request"), and I'm eventually disconnected. Any help will be greatly appreciated. Here's some log output. I can provide more if needed: Ubuntu 13.04 3.8.0-32-generic x86_64 lsusb: 03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24) lsmod: iwldvm                241872  0  mac80211              606457  1 iwldvm iwlwifi               173516  1 iwldvm cfg80211              511019  3 iwlwifi,mac80211,iwldvm dmesg: [    3.501227] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X [    3.503541] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 [    3.527153] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled [    3.527162] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled [    3.527170] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled [    3.527178] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [    3.527186] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_P2P disabled [    3.527192] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6235 AGN, REV=0xB0 [    3.527240] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [    3.551049] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [  375.153065] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.159727] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [  375.553201] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.559871] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 1892.110738] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 1892.117357] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5227.235372] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5227.242122] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5817.817954] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5817.824560] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5824.571917] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 5824.571929] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 5824.571935] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 6956.290061] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 6956.296671] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 6963.080560] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 6963.080566] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 6963.080570] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 7613.469241] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 7613.475870] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 7620.201265] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 7620.201278] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 7620.201285] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 8232.762453] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 8232.769065] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 8239.581772] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 8239.581784] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 8239.581792] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [13763.634808] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [13763.641427] iwlwifi 0000:03:00.0: Radio 
type=0x2-0x1-0x0 [16955.598953] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [16955.605574] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 lshw:    *-network        description: Wireless interface        product: Centrino Advanced-N 6235        vendor: Intel Corporation        physical id: 0        bus info: pci@0000:03:00.0        logical name: wlan0        version: 24        serial: b4:b6:76:a0:4b:3c        width: 64 bits        clock: 33MHz        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless        configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-32-generic firmware=18.168.6.1 ip=10.250.169.96 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn        resources: irq:46 memory:f7c00000-f7c01fff iwlist scan: Cell 02 - Address: 24:DE:C6:B0:C7:D9                     Channel:36                     Frequency:5.18 GHz (Channel 36)                     Quality=29/70  Signal level=-81 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=0000004ff3fe419b                     Extra: Last beacon: 27820ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030124                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1624001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3424001B000000FF000000000000000000000000000000           Cell 04 - Address: 24:DE:C6:B0:C3:E9                     Channel:149                     Frequency:5.745 GHz                     Quality=28/70  Signal level=-82 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=000000181f60e19c                     Extra: Last beacon: 28680ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030195                     IE: Unknown: 050400010000                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1695001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3495001B000000FF000000000000000000000000000000                     IE: Unknown: DD07000B8601040817                     IE: Unknown: 
DD0E000B860103006170313930333032           Cell 09 - Address: 24:DE:C6:B0:C0:29                     Channel:149                     Frequency:5.745 GHz                     Quality=39/70  Signal level=-71 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=00000112fb688ede                     Extra: Last beacon: 27716ms ago ifconfig (while connected): wlan0     Link encap:Ethernet  HWaddr b4:b6:76:a0:4b:3c             inet addr:10.250.16.220  Bcast:10.250.31.255  Mask:255.255.240.0           inet6 addr: fe80::b6b6:76ff:fea0:4b3c/64 Scope:Link           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1           RX packets:230023 errors:0 dropped:0 overruns:0 frame:0           TX packets:130970 errors:0 dropped:0 overruns:0 carrier:0           collisions:0 txqueuelen:1000            RX bytes:255999759 (255.9 MB)  TX bytes:16652605 (16.6 MB) iwconfig (while connected): wlan0     IEEE 802.11abgn  ESSID:"CatChat2x"             Mode:Managed  Frequency:5.745 GHz  Access Point: 24:DE:C6:B0:C0:29              Bit Rate=6 Mb/s   Tx-Power=15 dBm              Retry  long limit:7   RTS thr:off   Fragment thr:off           Power Management:off           Link Quality=36/70  Signal level=-74 dBm             Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0           Tx excessive retries:0  Invalid misc:3   Missed beacon:0

    Read the article

  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop. Each time I start my machine, I have to log in 3 times. I spent a week in IRC (Freenode #ubuntu) and got nothing but condescension. I've searched the official Ubuntu fora for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get since this machine is physically accessible by no one but me.
    I have gone into System > Preferences > Passwords and Encryption Keys, set the 'Passwords: default' keyring to 'Default' and unlocked it, and unlocked the 'Passwords: login' keyring too. Next, since that changed nothing, I set 'Passwords: login' to 'Default' and checked to make sure it was still unlocked. Again, no change; I still get 3 login prompts at startup. I've checked twice to ensure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still get 3 login prompts.
    I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying.
    I'm trying to help our local Old Time music collective get their machines switched over to Ubuntu in order to save them some money, which they can use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I were out of town or out of country and they ran into a problem, they'd have no way of fixing it, as they're all less experienced than even I am.
    I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that KNOWLEDGEABLE, HELPFUL support for Ubuntu is lacking, barring any desire on the problem-experiencing user's part to avoid condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be damaged by sarcastic behavior on the part of those who serve, effectively, as its public face.
    Thank you all for your help and support. Now... does anyone have a solution to my problem?

    Read the article

  • Media server...serving files...............for a limited time only

    - by Craig
    I’m new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room doing double duty as an HTPC and a media server, to share movies, TV shows, music, etc. on my Windows network. So, using some spare old parts I had lying around, I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user-friendly variant, especially for people who are new to Linux.
    I had to do a few things to get the media files shared properly on my network:
    1. Made sure my two media drives auto-mount every time I boot the computer by editing the “fstab” file: “sudo nano /etc/fstab”
    2. Installed Samba: “sudo apt-get install samba”
    3. Set a password for Samba: “sudo smbpasswd -a USERNAME”
    4. Edited the Samba configuration file to make sure the computer was in my network's workgroup: “sudo nano /etc/samba/smb.conf” (a sample share definition is sketched below)
    5. In the file manager (not sure if that’s the right name for it), right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to: Owner, Group, and Others - Access: Create and Delete Files.
    6. Adjusted the Power Management settings to never put the system into sleep mode.
    I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then they would stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked for a bit longer (about 10-15 mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn’t seem to be an issue with the files themselves; it seems to be a sharing problem on my network.
    So my questions to the Ubuntu gurus out there are:
    - Did I miss adding/editing something in the Samba configuration file?
    - Did I use the right method to share my media files (file manager vs. using the terminal)?
    - Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?)?
    - Are there any special settings in Ubuntu that I should be using since this computer is a media server (is there a media server mode?...!...?)?
    Any help on this matter would be greatly appreciated. Thanks
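    For reference, a guest-free share of the kind described in step 4 is normally declared in /etc/samba/smb.conf with a block like the one below. This is only a sketch, not the poster's actual configuration; the share name, path, and user are invented for the example:

        # Append a share definition to smb.conf (share name, path, and user are examples)
        sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'
        [media]
           path = /srv/media
           browseable = yes
           read only = yes
           guest ok = no
           valid users = craig
        EOF

        # Check the configuration for syntax errors, then restart Samba
        testparm -s
        sudo service smbd restart

    Sharing from the terminal this way keeps everything in one file, which makes it easier to compare against working examples than the file-manager sharing dialog.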

    Read the article

  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would now be a marginally better choice for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using Synaptic's save/get markings. The problem here is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips.
    Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and that would save at least most of the time needed to reconfigure. But are there differences in configuration files in either of these directories for 32 vs 64 bits that might cause problems?
    After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time.
    Some methods I've seen suggested, among others:
    A. From the Ubuntu forums: On your old system (assuming it is still working), start up Synaptic and go: File -> Save Markings, and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system. You need to check, at the bottom, "Save full state, not only changes". This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File -> Read Markings and point it at your saved file. After that has completed, select Apply to kick off the download and installation of all of those packages you had installed previously!
    B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:
    # dpkg --get-selections "*" > myselections   # or use \*
    # debconf-get-selections > debconfsel.txt
    and the following will reinstall and reconfigure them:
    # dselect update
    # debconf-set-selections < debconfsel.txt
    # dpkg --set-selections < myselections
    # apt-get -u dselect-upgrade   # or dselect install
    C. A variation on the above, which I've seen a lot, this one from Stack Overflow:
    dpkg --get-selections > package_list
    then on the new install:
    cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade
    I don't really understand B, or why it's slightly different from many others.
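    For what it's worth, a minimal consolidation of methods B and C (package list only, without the debconf answers) might look like the sketch below, assuming the new 64-bit install is the same Ubuntu release with the same repositories enabled. A few architecture-specific package names may still fail and need manual attention:

        # On the old 32-bit system: save the package list
        dpkg --get-selections > ~/package_list
        # copy ~/package_list (and, if desired, your home directory) to the new system

        # On the new 64-bit system: replay the selections and install
        sudo dpkg --clear-selections               # optional: deselect everything first
        sudo dpkg --set-selections < package_list
        sudo apt-get -u dselect-upgrade            # installs everything marked as selected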

    Read the article

  • ZFS pool broken after upgrading to 14.04 LTS

    - by cruiserparts
    Well, I have been putting off upgrading to 14.04 for fear that I would break something - actually, for fear that it would break ZFS (or I would break it). I am basically slightly better than a novice at Linux. I have spent the last couple of hours trying to get the pool back. Now I am at the stage where I don't think I have a complete failure, but I am worried that I may break it. So if you could help me not break it, and recover it, I would be thankful.
    My ZFS is file storage, not boot. It was working fine for a year and was working perfectly before the upgrade (scrub and everything was fine). I was confident that the upgrade would work (or at least that I could fix it) because I had upgraded once in the past, the pool went missing, but I was able to get it back. I have reinstalled zfs, the zfs utilities, and some dependencies (after searching this forum).
    I think what happened is that 14.04 deleted some config file, or specifies disk names differently, but I could be wrong. When I set the pool up originally, I was using specific device IDs as I recall (because I did not want to break things if they got reassigned at boot). So see if this helps. I can confirm that the old mountpoint folders are there but empty.
    no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
      pool: naspool1
     state: UNAVAIL
    status: One or more devices could not be used because the label is missing or invalid. There are insufficient replicas for the pool to continue functioning.
    action: Destroy and re-create the pool from a backup source.
       see: http://zfsonlinux.org/msg/ZFS-8000-5E
      scan: none requested
    config:
        NAME                                          STATE    READ WRITE CKSUM
        naspool1                                      UNAVAIL     0     0     0  insufficient replicas
          raidz1-0                                    UNAVAIL     0     0     0  insufficient replicas
            scsi-SATA_WDC_WD1001FALS-_WD-WMATV0990825 UNAVAIL     0     0     0
            scsi-SATA_WDC_WD1001FALS-_WD-WMATV2995365 UNAVAIL     0     0     0
            scsi-SATA_WDC_WD10EARS-00_WD-WMAV51894349 UNAVAIL     0     0     0
    ___@ourserver:~$ sudo zpool import naspool1
    cannot import 'naspool1': a pool with that name is already created/imported, and no additional pools with that name were found
    ___@ourserver:~$ sudo zfs list
    no datasets available
    What other output can I post to help? I'm thinking the update deleted some ZFS config files. It seems like the pool exists, and certainly 3 perfectly working disks did not fail at once. I am worried that I may break something without a little bit of guidance. Thanks.
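    Not part of the original post, but given the poster's own hint that the upgrade may be naming the disks differently, a common first recovery step with ZFS on Linux is to make the pool re-scan using the persistent /dev/disk/by-id names rather than the stale cached state. A hedged sketch (the pool name is taken from the output above; this is not guaranteed to be the fix):

        # Only a sketch of a common recovery step; back up the cache file first
        sudo cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak 2>/dev/null
        sudo zpool export naspool1                       # may fail if the pool is not actually imported
        sudo zpool import -d /dev/disk/by-id naspool1    # scan by-id paths for the pool labels
        sudo zpool status naspool1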

    Read the article

  • What's the best way to install the GD graphics library for Nagios?

    - by user1196
    While trying to install Nagios 3.2.3, I ran their ./configure script and got these errors:
    checking for main in -liconv... no
    checking for gdImagePng in -lgd (order 1)... no
    checking for gdImagePng in -lgd (order 2)... no
    checking for gdImagePng in -lgd (order 3)... no
    checking for gdImagePng in -lgd (order 4)... no
    *** GD, PNG, and/or JPEG libraries could not be located... *********
    Boutell's GD library is required to compile the statusmap, trends and histogram CGIs. Get it from http://www.boutell.com/gd/, compile it, and use the --with-gd-lib and --with-gd-inc arguments to specify the locations of the GD library and include files.
    NOTE: In addition to the gd-devel library, you'll also need to make sure you have the png-devel and jpeg-devel libraries installed on your system.
    NOTE: After you install the necessary libraries on your system:
    1. Make sure /etc/ld.so.conf has an entry for the directory in which the GD, PNG, and JPEG libraries are installed.
    2. Run 'ldconfig' to update the run-time linker options.
    3. Run 'make clean' in the Nagios distribution to clean out any old references to your previous compile.
    4. Rerun the configure script.
    NOTE: If you can't get the configure script to recognize the GD libs on your system, get over it and move on to other things. The CGIs that use the GD libs are just a small part of the entire Nagios package. Get everything else working first and then revisit the problem. Make sure to check the nagios-users mailing list archives for possible solutions to GD library problems when you resume your troubleshooting.
    ********************************************************************
    Which package do I want? libgd2-xpm-dev? libgd2-noxpm-dev? php5-gd? I'm not looking to do any image processing myself - I just want to get Nagios working.
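    Going by the configure output above, and assuming a Debian/Ubuntu system (the candidate package names are Debian-style), the usual approach is to install a GD development package together with the PNG and JPEG development headers and then rerun configure. Exact package names vary a little between releases, so treat this as a sketch:

        # Install GD development headers plus PNG/JPEG headers. Either libgd2-xpm-dev
        # or libgd2-noxpm-dev provides the GD headers Nagios needs; php5-gd is the PHP
        # binding and is not used by this build.
        sudo apt-get install libgd2-xpm-dev libpng-dev libjpeg-dev

        # Clean out the previous attempt and run configure again
        make clean
        ./configure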

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented “feature” in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you’re familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties – in my opinion – is the loss of the snapshot label that served two purposes… First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created. As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn’t see it in the demonstration I was preparing. It sort of upset my routine, and I’m rather partial to my routines. I thought perhaps I wasn’t looking in the right place and changed Report Manager from Tile View to Detail View, but no – that label was still missing. In the grand scheme of life, it’s not an earth-shattering change, but you’ll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!) A snapshot from the past In case you don’t remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here’s an image I snagged from my Reporting Services 2008 Step by Step manuscript: A snapshot in the present A report server running in SharePoint integrated mode had no such label. There you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here’s a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot? Consider descriptions as an alternative So my report snapshot demonstration has one less step, and I’ll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn’t up-to-date if the users know that something changed in the source of the report. Notice that the full description doesn’t display in Tile View, so keep it short and sweet or instruct users to open Details View to see the entire description.

    Read the article

  • Geek Bike Ride JavaOne 2012

    - by Tori Wieldt
    "Geek Bike Ride?" the clerk at the bike rental shop asked. "Are you guys all from the same company?" "We aren't even from the same country!" we answered. "I'm from Russia." "We're from Germany."  "I'm from Belgium." "I'm from Palo Alto." "I'm from Japan."  "We're from Brazil." "We're from Brazil." "I'm from Sweden." "Coooool" was all she could say. She was right. The Geek Bike Ride was cooool. We had 39 bike riders and one skater show up Saturday for a great route from San Francisco's Fisherman's Wharf, across the Golden Gate bridge, to Saulsalito, and back to the city by ferry. Duke Bike jerseys, sponsored by OTN, were given out. To make sure Java developers got them, each person had to answer a Java question to get a jersey. The questions were really hard, like "Who is the Father of Java?" "What's the biggest Java conference in San Francisco?" The best was when the question was "Name one of Duke's Choice Award winner from this year," and Régina ten Bruggencate answered answered "Me!"  It was foggy throughout the day, with the sun poking out occasionally. The fog was thickest on the bridge, more that one rider commented that we were "in the cloud." It was a great day to meet new friends, and have a chat with old friends. We all had fun, though some of us may more a little more slowly during JavaOne. Ride on!  Photos by permission by Arun Gupta and Yoshio Terada. Thanks, guys!

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by torbengb
    Summary: My machine hangs on shutdown/restart: all windows and the menu bar disappear but the desktop wallpaper remains, and it stays like that without disk activity forever (hours). It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown text. How can I troubleshoot this? Details: I've just received a new nettop computer (Acer Aspire Revo 3700: CPU:Atom D525, GPU:Nvidia ION2). I've just made a clean install of Ubuntu 10.10 using the standard USB pendrive method. The machine boots okay and works OK including WLAN and audio, but the graphics are not OK. Ubuntu offered to install&activate the current recommended Nvidia driver, but the machine hangs on shutdown/restart which prevents the installation of the proper Nvidia driver. I have to cycle the power to reboot. I ran the Update Manager in the hope that the updates would fix the hang-up. At the end of the update-installation it asked to reboot - and got stuck just like before. I see no obvious cause of the freeze and I don't know if it's caused by graphics problems or anything else. The only USB attachment is a mouse/keyboard; I don't have any external storage attached; and I don't have any programs running (the machine freezes even when doing restart right after login). How can I determine what is causing the freeze? How can I fix this? I'm frankly rather disappointed because I bought this new machine in the hopes of getting the graphics to work, which failed miserably on my old machine, even though Ubuntu is supposed to be good with Nvidia. Being a fresh convert from Windows, I was hoping for a happier experience this time, so I'm very much looking forward to your suggestions! ... After posting this question, I see related questions in the right sidebar: this, this, and this. Don't know why these didn't show up while I composed by question. Those questions suggest some ACPI settings but I am not experienced enough to find/change those settings. I'll try the sudo shutdown -h now command when I get home and see if that works, then update this question. I did check the system BIOS but didn't see anything out of the ordinary.

    Read the article

  • Extra Life 2012 - The Final Plea ... Until the Next One

    - by Chris Gardner
    I thought I'd share the email stream that my friends and family get about the event. So, here we are again. We scream closer to the event, and the goal is not met. I was approached by the ghost of feral platypii past last night. Well, approached is putting it lightly. I was mugged by the ghost of platypii past last night. He reminded me, in no uncertain terms, that I have only reached the midway point of my fundraising goal. He then reminded me, in even less uncertain terms, that we are one week away from the event. There were other reminders past that, but this is a family broadcast. *shudder* Now, let us be serious for a moment. The event organizers claim a personal story helps to tug heart strings, whatever those are... I've been to Children's Hospital of Birmingham. I had to take Spawn, the Latter, there to verify she was not going to die. Instead, she's just a ticking time bomb for the next generation, but I digress. While I was there, I saw things. I saw child after child after child waiting for their appointment. I saw the most sublime displays of children's art juxtaposed with hospital sterilization that I could ever possibly imagine. I saw and heard things that only occur in the nightmares of parents, and I was only in the waiting rooms. But I will never forget the 10-ish year old girl that came in for her regularly scheduled dialysis appointment ... as if it was just another Friday afternoon. She had her school books, a little snack, a book to read for pleasure, and a DVD, in case she finished her homework a little early. You know, everything you'd need for an afternoon hooked up to a huge medical machine that's going to clean out all the toxins in your blood. As she entered the secured area, she warmly greeted all the doctors and nurses with the same familiarity that I would greet the staff of my favorite coffee shop as I stopped in for my morning cup of coffee. I don't know the status of that little girl. I don't know if she's healthy or, quite frankly, alive. I don't even know her name, as I only heard it in passing for the 37 seconds our paths crossed. However, I do remember being incredibly moved and touched by her upbeat attitude about the situation, and I hope that my efforts over the last two Octobers got her, in some way, a little comfort. And, if she is still with us, I hope we can get her a little more.

    === PREVIOUS MESSAGE FOLLOWS ===

    Greetings (Again), If you are receiving this updated message, then you didn't feel generous the first time. Now, I tried to be nice the first time. I tried to send a simple, unobtrusive email message to get you into the spirit. Well, much like the bell ringers that I ignore in front of the Wal-Mart, you ignored me. I probably should have seen that coming... However, unlike those poor souls, I know how to contact you. And I can find out where you live. So, so, so, you better feel lucky that I'm too lazy to terrorize you people, because I could do it. Remember, it's not for me, it's for those poor kids... and the feral platypii. Because we can make more children, but platypii are hard to come by.

    === ORIGINAL MESSAGE FOLLOWS ===

    It's that time of year again. The time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations... Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samuel that you're good for that money.
    I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past 2 years. This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part). Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up. You can place your bid at the link below. Feel free to spread the word to anyone and everyone. I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by refried beans when lovingly hugged in a warm tortilla. Enjoy your burrito. http://www.extra-life.org/participant/cgardner

    Read the article

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here, and how do I fix it? If I had thought 12.10 would be this much of a hassle, I would never have upgraded. Here is a sampling of the output returned by "apt-get -f install". It should also be noted that it is just these 6 packages; no other packages have given me this kind of trouble. Well, I should say, as of now: it was just 5, but then I got an update for Unity, and now unity-common is added to the troublemakers, which prevents me from further upgrading the actual unity package, as this package is a dependency.

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
         /var/cache/apt/archives/pcmciautils_018-8_i386.deb
         /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
         /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
         /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
         /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I would also like to note that I have cleaned the apt cache both through the terminal and manually, I have tried installing the packages manually through dpkg from both the /var/cache/apt/archives/ location and from my own manually downloaded .deb files, I have tried using dpkg-reconfigure, and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would lead to the input/output errors. Quite frankly, I am just out of options and have grown tired of trying to google a solution to this mess, but I still do not wish to resort to backing up settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings about grammar, punctuation, and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem, then you are just doing nothing but contributing to it. Thank you.

    Read the article

  • Do we need to adopt a black-box asset our project is inheriting from its predecessor?

    - by Tom Anderson
    Our client has an eCommerce site which was developed by an in-house team, and is now showing its age. I work for a firm brought in as external contractors to build a replacement. Part of the current site is a Flash viewer applet which displays media about the product - zoom-able images, 360-degree views, movies, and so on. We need to show the same media the current site does, so we are simply reusing the viewer. The viewer is embedded on a page in the usual way, and told what media to show by means of an XML file it loads from our server, which is pretty simple for us to generate. We've got this working; it was pretty straightforward. But what else do we need to do? The thing is, as far as we're concerned, the viewer is a binary blob which is served from the client's content-distribution network. We embed it, feed it some XML, and it does its job, but we have no power over its internals. It's completely opaque to us - a black box. We can use it to do what it does, but we can't change it, so if we ever need to do something different, we're stuffed. We're building this site for the client, and when we're done, we'll hand it over for them to maintain. We won't be doing the maintenance ourselves. There's a small team within the client who are working as part of our team, and who will be the ones doing the maintenance. That team only includes one person from the team that built the old site, and it's not someone who knows the image viewer. The people who do know the image viewer are not slated to join our team when our system replaces theirs - they'll be moved to other projects. The documentation on the viewer is extremely thin, and as far as I know doesn't cover the internals at all. My worry is that if someone doesn't take some positive action, all knowledge of the internal workings of the viewer - even down to where the source code for it is - will be lost. It's possible it already has been. Is this something to worry about? If so, whose job is it to worry about it? What should they do about it once they've got worried?

    Read the article

  • How Did Microsoft Market DotNet?

    - by Fendy
    I just read Joel's article about Microsoft's breaking change (non-backwards compatibility) with .NET's introduction. It is interesting and explicitly reflects the conditions at that time. But now almost 10 years have passed. The breaking change: the article is mainly about how badly Microsoft handled introducing non-backwards-compatible development tools, such as .NET, instead of improving the already widely used classic ASP or VB6. As many know, .NET is not natively embedded in Windows XP (it is in Vista or 7), so in order to use .NET apps you needed to install the .NET Framework at over 300 MB (which was big in those days). However, we see that nowadays many businesses use .NET as their main development tool, with ASP.NET or MVC for their web-based applications. C# is nowadays one of the top programming languages (the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive even though there is newer technology out there (and it is still widely used). Imagine if Microsoft had not introduced the breaking change; there would be many corporations still using classic ASP or VB-based applications (there still are, but not that many). There are many corporations using additional services such as Azure or SharePoint (never mind how expensive they are). Please note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C-based or older languages and have not been ported to a newer high-level language. The question: how did Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting an application is very hard, gives no immediate value (the Netscape story), and is very risky. I am more interested in Microsoft's way, and not in opinions such as "because .NET is OOP, or .NET is dll-embeddable, etc.". This question may be constructive, as technology has changed vastly over time lately. As we can see, Microsoft changed ASP.NET Web Forms to MVC, WinForms is legacy now, distribution is starting to move to the Windows Store rather than traditional installation, touchscreens are everywhere, and later on we will have see-through applications such as Google Glass. And those will be breaking changes. We will need to account for portability as an issue nowadays. We will need more than a mere technology choice; we will also need migration plans. Maybe we will even need something as critical as a multiplatform language compiler, as approached by Joel's Wasabi. (Hey, I read his articles too much!)

    Read the article

  • Which is the most practical way to add functionality to this piece of code?

    - by Adam Arold
    I'm writing an open-source library which handles hexagonal grids. It mainly revolves around the HexagonalGrid and the Hexagon class. There is a HexagonalGridBuilder class which builds the grid, which contains Hexagon objects. What I'm trying to achieve is to enable the user to add arbitrary data to each Hexagon. The interface looks like this:

        public interface Hexagon extends Serializable {
            // ... other methods not important in this context
            <T> void setSatelliteData(T data);
            <T> T getSatelliteData();
        }

    So far so good. However, I'm writing another class named HexagonalGridCalculator which adds some fancy pieces of computation to the library, like calculating the shortest path between two Hexagons or calculating the line of sight around a Hexagon. My problem is that for those I need the user to supply some data for the Hexagon objects, like the cost of passing through a Hexagon, or a boolean flag indicating whether the object is transparent/passable or not. My question is: how should I implement this? My first idea was to write an interface like this:

        public interface HexagonData {
            void setTransparent(boolean isTransparent);
            void setPassable(boolean isPassable);
            void setPassageCost(int cost);
        }

    and make the user implement it, but then it came to my mind that if I add any other functionality later, all code will break for those who are using the old interface. So my next idea is to add annotations like @PassageCost, @IsTransparent and @IsPassable which can be added to fields, and when I'm doing the computation I can look for the annotations in the satelliteData supplied by the user. This looks flexible enough if I take into account the possibility of later changes, but it uses reflection. I have no benchmark of the costs of using annotations, so I'm a bit in the dark here. I think that in 90-95% of cases the efficiency is not important, since most users won't use a grid where this is significant, but I can imagine someone trying to create a grid with a size of 5,000,000,000 x 5,000,000,000. So which path should I start walking on? Or are there some better alternatives? Note: These ideas are not implemented yet, so I did not pay too much attention to good names.
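
    The annotation-based idea can be sketched in a few lines of plain Java. This is only a minimal illustration of the technique described above, not part of the actual library: the annotation and class names (PassageCost, IsPassable, TerrainData, AnnotatedDataReader) and the default values are made up for the example.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.lang.reflect.Field;

        // Hypothetical marker annotations the library would look for on user fields.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface PassageCost {}

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface IsPassable {}

        // Example of user-defined satellite data; the library never sees this type directly.
        class TerrainData {
            @PassageCost
            int movementCost = 3;

            @IsPassable
            boolean passable = true;
        }

        // Reflection-based reader a calculator could use to pull values out of
        // whatever satellite-data object the user attached to a Hexagon.
        final class AnnotatedDataReader {

            static int readPassageCost(Object satelliteData) throws IllegalAccessException {
                for (Field field : satelliteData.getClass().getDeclaredFields()) {
                    if (field.isAnnotationPresent(PassageCost.class)) {
                        field.setAccessible(true);
                        return field.getInt(satelliteData);
                    }
                }
                return 1; // assumed default cost when no annotated field is present
            }

            static boolean readPassable(Object satelliteData) throws IllegalAccessException {
                for (Field field : satelliteData.getClass().getDeclaredFields()) {
                    if (field.isAnnotationPresent(IsPassable.class)) {
                        field.setAccessible(true);
                        return field.getBoolean(satelliteData);
                    }
                }
                return true; // assumed default: passable
            }

            public static void main(String[] args) throws IllegalAccessException {
                Object data = new TerrainData();
                System.out.println(readPassageCost(data)); // prints 3
                System.out.println(readPassable(data));    // prints true
            }
        }

    If the reflective lookup ever matters at the grid sizes mentioned above, the Field objects can be resolved once per satellite-data class and cached in a Map, so each subsequent hexagon pays only for a cached field read rather than a full annotation scan.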

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:
    - Use one of OUD's internal capabilities, called the Replication Gateway
    - Use our synchronization tool, Directory Integration Platform, part of Oracle Directory Services Plus
    - Manual export and import
    Let's check the pros and cons of each method. The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created it as a feature of OUD because it works at the replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of this approach is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation-specific procedures; it is managed like other OUD components, with monitoring and configuration via the standard console. The OUD Replication Gateway does not, however, perform remapping or transformation between ODSEE and OUD. Using Directory Integration Platform as a component external to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement. Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. This solution can be used if full HA with up-to-date data is not needed (during the synchronization period). It is often used if data cleaning needs to take place to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick google to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version. I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database and restarted the agent. I checked the jobs and they were all there; everything looked great, and it was, until the server was rebooted a few days later. Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed." A bit more googling pointed me to the msdb.dbo.syssubsystems table:

        SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point = 'PowerShellStart'

    and in particular the value of the subsystem_dll column. It still had the path to SQLPOWERSHELLSS.DLL, but on the old server. The DR instance has a different name to the live instance, and so the paths are different. This was quickly fixed with the following SQL:

        USE msdb;
        GO
        sp_configure 'allow updates', 1;
        RECONFIGURE WITH OVERRIDE;
        GO
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
        WHERE start_entry_point = 'PowerShellStart';
        GO
        sp_configure 'allow updates', 0;
        RECONFIGURE WITH OVERRIDE;
        GO

    I stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken:

        SELECT subsystem_dll FROM msdb.dbo.syssubsystems

    shows a further 10 wrong paths; fortunately they are for parts of SQL (replication, SSIS, etc.) we aren't using!
    Lessons learnt:
    1. DR exercises are a good thing!
    2. Keep the Live and DR environments as similar as possible.

    Read the article

  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple monitor support. Plus there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input. Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned, if you have multiple developers working on the application, that the project files in VS2010 and earlier versions of Visual Studio differ.) This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt: The Visual Studio 2010 user experience is noticeably different than with previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on projects targeting different versions of the .NET Framework. This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more! And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links:
    - One click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2
    - Eight Quick Hit videos showing some of the cool new VS2010 features
    - VS2010 and ASP.NET 4.0 Release Announcement with some great info/links from none other than Scott Guthrie
    Happy Programming!

    Read the article

< Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >