Search Results

Search found 6701 results on 269 pages for 'year'.


  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers struggle significantly with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst?
    In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What's the Problem?
    Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I have heard many companies claim "it's cheaper" or "it allows us to scale" without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability?
    I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime?
    Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more) based on the current overall conditions of the cloud service provider, most of which are outside of your control. As a result, in PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a Pluralsight author, and runs the Azure Florida Association.
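
    The "assume failure and retry gracefully" advice above is easy to state and easy to get wrong. As a rough illustration only (the exception type and limits are hypothetical, not from the post), a minimal retry-with-exponential-backoff wrapper in Python might look like this:

        import random
        import time

        class TransientError(Exception):
            """Hypothetical stand-in for a retryable cloud fault (throttling, failover)."""

        def call_with_retry(operation, max_attempts=5, base_delay=0.5):
            # Retry a failure-prone call with exponential backoff plus jitter,
            # so many clients don't hammer a recovering service in lockstep.
            for attempt in range(1, max_attempts + 1):
                try:
                    return operation()
                except TransientError:
                    if attempt == max_attempts:
                        raise  # out of attempts; surface the failure
                    time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))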

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS
    2014 Oracle Excellence Award: Sustainability Innovation

    Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle's Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. Your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation. Submit a nomination form located here by Friday, June 20 if your company is using any Oracle product to take an environmental lead as well as to reduce costs and improve business efficiencies by using green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28-October 2) in San Francisco.

    About the Award
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had as well as the business efficiencies they have achieved through their combined use of Oracle products.

    Nomination Eligibility
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday, June 20, 2014.

    Benefits to Award Winners
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website and/or press release
    • Possible placement in Oracle Profit Magazine and/or Oracle Magazine
    • 'Enable the Eco-Enterprise' podcast opportunity

    See last year's winners here.

    Questions? Send an email to: [email protected]
    Follow Oracle's Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog.
    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you're hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk.

    Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes, there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds.

    Proving the concept of browsing a catalogue is not dead, it became a top-5 referrer for several apparel retailers like Lands' End, Nordstrom, and Bergdorfs. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users are going there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest is also scoring better in terms of user engagement: 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to 3rd-party sites, by the way.)

    Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose?
    1. They transcend language barriers.
    2. They're fun and addictive to look at.
    3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too).
    4. They're efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content.
    5. The audience for them surpasses demographic limitations.
    6. They can effectively communicate and trigger an emotion.
    7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest's iPad app hit #1 in the Apple store in one day. Even as far back as 2009, over 2.5 billion devices with cameras were on the streets, generating in just one year 10% of all the photos ever taken.

    But let's say you're not a retailer. What if you're a B2B whose products or services aren't visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn't make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to grab the attention of social users scrolling down their News Feeds at top speed? You're competing with every other cool image out there from countless content sources. Bore us and we'll fly right past you.

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone once put it to me, if you were starting a business right now, would you bother setting up an Exchange server to manage your email, or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum toward having external services manage more and more of the infrastructure that's not business unique, why would you burn up a server and a license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it, and it'll be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now.

    The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting; your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting: keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.
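
    As a rough sketch of the relay idea described above (this is not Red Gate's actual implementation; the endpoint, payload, and interval are all invented for illustration), an agent that only ever pushes outbound might look something like this:

        import json
        import time
        import urllib.request

        INGEST_URL = "https://monitoring.example.com/ingest"  # hypothetical endpoint

        def collect_metrics():
            # Placeholder: a real relay would query the local SQL Server instance
            # for wait stats, CPU, disk queue lengths, and so on.
            return {"host": "sqlserver01", "cpu_pct": 12.5, "sampled_at": time.time()}

        while True:
            body = json.dumps(collect_metrics()).encode("utf-8")
            req = urllib.request.Request(INGEST_URL, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # a single outbound HTTPS connection is the
                                         # only firewall change the relay model needs
            time.sleep(60)  # sample once a minute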

    Read the article

  • How do you go from a so-so programmer to a great one? [closed]

    - by Cervo
    How do you go from being an okay programmer to being able to write maintainable, clean code? For example, David Hansson was writing Basecamp when, in the process, he created Rails as part of writing Basecamp in a clean, maintainable way. But how do you know when there is value in a side project like that?

    I have a bachelor's in computer science, and I am about to get a master's, and I will say that colleges teach you to write code to solve problems, not neatly or anything. Basically you think of a problem, come up with a solution, and write it down... not necessarily the most maintainable way in the world. Also, my first job was in a startup, and now my third is in a small team in a large company where the attitude was/is "get it done yesterday" (also, most of my jobs are mainly database development with SQL, with a few ASP.NET web pages/.NET apps on the side). So of course cut/paste is favored over doing things cleanly. And they would rather have something yesterday, even if you have to rewrite it next month, than have something in a week that lasts for a year. Also, spaghetti code turns up all over the place, and it takes very smart people to write/understand/maintain spaghetti code... However, it would be better to do things so simply and cleanly that even a caveman/woman could do maintenance. Also, I get very bored/unmotivated having to go modify the same things cut/pasted in a few locations.

    Is this the type of skill that you need to learn by working with a serious software organization that has an emphasis on maintenance, and maybe even an architect who designs a system architecture and reviews code? Could you really learn it by volunteering on an open source project (it seems to me that a full-time programmer job is way more practice than a few hours a week on an open source project)? Is there some course where you can learn this? I can attest that graduate school and undergraduate school do not really emphasize clean software at all. They just teach the structures/algorithms and then send you off into the world to solve problems.

    Overall, I think the first thing is learning to write clean, maintainable code within the bounds of the project in order to become a good programmer. The next thing is learning when you need to do a side project (like a framework) to make things more maintainable and clean, while still delivering for the deadline, in order to become a great programmer. For example, you are making an SQL report and someone gives you 100 calculations for individual columns. At what point does it make sense to construct a domain-specific language to encode the rules simply and then generate all the SQL, as opposed to cut/pasting the query a bunch of times and then adjusting each copy to do the appropriate calculations? This is the type of thing I would say a great programmer would know. He/she would maybe even know ways to avoid the domain-specific language and still do all the calculations without creating an unmaintainable mess or a ton of repetitive code to cut/paste everywhere.
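
    The report example above is concrete enough to sketch. As a rough illustration (the table, grouping column, and rules are all hypothetical), encoding each calculation as data and generating the SQL keeps the 100 rules in one place instead of 100 pasted queries:

        # Each rule maps an output column to the SQL expression that computes it.
        # A real report would carry ~100 of these entries, all in one table.
        RULES = {
            "gross_revenue": "SUM(price * quantity)",
            "net_revenue":   "SUM(price * quantity) - SUM(discount)",
            "order_count":   "COUNT(DISTINCT order_id)",
        }

        def build_report_sql(table, group_by, rules):
            select_list = ",\n       ".join(
                f"{expr} AS {name}" for name, expr in rules.items())
            return (f"SELECT {group_by},\n       {select_list}\n"
                    f"FROM {table}\nGROUP BY {group_by}")

        print(build_report_sql("sales", "region", RULES))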

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications, all based on the Zend Framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time, when my time is up, anyone else coming in can have no problem extending on what I have produced.

    Example structure:

    applicationStorage\ (contains all applications and associated data)
    applicationStorage\Applications\ (contains the applications themselves)
    applicationStorage\Applications\external\ (application grouping folder; contains all external customer access applications)
    applicationStorage\Applications\external\site\ (main external customer access application)
    applicationStorage\Applications\external\site\Modules\
    applicationStorage\Applications\external\site\Config\
    applicationStorage\Applications\external\site\Layouts\
    applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application; example: ZendExtended_Controller_Action extends Zend_Controller_Action)
    applicationStorage\Applications\external\mobile\ (mobile external customer access application; different workflow and limited capabilities compared to the full site version)
    applicationStorage\Applications\internal\ (application grouping folder; contains all internal company applications)
    applicationStorage\Applications\internal\site\ (main internal application)
    applicationStorage\Applications\internal\mobile\ (mobile access has a different flow and limited abilities compared to the main site version)
    applicationStorage\Tests\ (contains PHP unit tests)
    applicationStorage\Library\
    applicationStorage\Library\Service\ (contains all business logic, services and the service locator; these are completely decoupled from the Zend Framework and rely on the models' interfaces)
    applicationStorage\Library\Zend\ (Zend Framework)
    applicationStorage\Library\Models\ (doesn't know services but is linked to the Zend Framework for DB operations; contains model interfaces and model datamappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The library contains all "universal" code. The Zend Framework is at the heart of all applications, but I wanted my business logic to be Zend-Framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend Framework, if I want to move my business logic (services) to a non-Zend-Framework environment (maybe some other PHP framework).

    Using a service locator, and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. For example, all external applications will have a service_auth_External implementing service_auth_Interface registered; the same goes for internal applications, with Service_Auth_Internal implementing service_auth_Interface. Either way, the calling code just asks for Service_Locator::getService('Auth').

    I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions, I would be greatly appreciative. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services:

    \applicationstorage\Applications\internal\webservice
    \applicationstorage\Applications\external\webservice
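
    As a rough, framework-agnostic sketch of the bootstrap registration described above (written in Python purely for illustration; the class names mirror the ones in the post, and the method names are invented):

        class ServiceLocator:
            _services = {}

            @classmethod
            def register(cls, name, service):
                cls._services[name] = service

            @classmethod
            def get_service(cls, name):
                return cls._services[name]

        class ExternalAuth:  # stands in for service_auth_External
            def authenticate(self, user, password):
                return "checked against the external directory"

        class InternalAuth:  # stands in for Service_Auth_Internal
            def authenticate(self, user, password):
                return "checked against the internal directory"

        # Each application's bootstrap registers the variant it needs;
        # the rest of the code only ever asks the locator for "Auth".
        ServiceLocator.register("Auth", ExternalAuth())  # external site bootstrap
        auth = ServiceLocator.get_service("Auth")
        print(auth.authenticate("user", "secret"))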

    Read the article

  • Is changing my job now a wise decision? [closed]

    - by FlaminPhoenix
    First, a little background about myself. I am a JavaScript programmer with 3.8 years of experience. I joined my current company a year and 3 months ago, and I was recruited as a JavaScript programmer. I was under the impression I was a programmer in a programming team, but this was not the case. No one else except me and my manager knows anything about programming in my team. The other two teammates copy and paste stuff from websites into Excel sheets. I was told I was being recruited for a new project, and it was true. The only problem was that the server-side language they were using was PHP. They were using a popular library with PHP, and I had never worked with PHP before. Nevertheless, I learnt it well enough to get things working, and received high praise from my boss's boss on whichever project I worked on. Words like "wow" and "This looks great, the client's gonna be impressed with this" were sprinkled every now and then on reviewing my work. They even managed to sell my work to a couple of clients, and as I understand, both of my projects are going to fetch them a pretty buck.

    The problem: I was asked to move into a project which my manager was handling. I asked them for training on the project, which never came, and sure enough I couldn't complete my first task on the new project without shortcomings. I told my manager there were things I didn't know how to get done in the new project due to lack of training. His project had zero documentation. I was told he would "take care" of everything relating to those shortcomings. In the meantime, I was asked to switch to another project. My manager made the necessary changes and later told me that the build had "broken" on the production server and that I needed to "test" my changes before saying things were done. I never deployed it on the production server. He did. I never saw, or had the opportunity to see, the final build before it went to production. He called me into a separate meeting and started pointing fingers at me, but I took full responsibility even if I didn't have to. He later got on a call with his boss, in my presence, and gave him the impression that it was all my fault. I have not confronted him about this so far.

    I have worked late and done overtime without them asking a lot, but last week I had just got home from work when I got calls asking me to solve an issue which until then they had kept quiet about, even though they had been informed about it. I asked my manager why I hadn't been tasked with this when I was in the office. He started telling me which statements to put where, as if to mock me, and said that this "is hardly an overtime issue", and this pissed me off. Also, during the previous meeting, he was constantly talking highly about his own work while trying to demean mine. In the meantime, I have attended an interview with another MNC, and the interviewers there were fully respectful of my decision to leave my current company. It's a software company, so I can expect my colleagues to know a lot more than me. I'm told I can expect their offer anytime this week.

    My questions:
    1. Is my anger towards my manager justified?
    2. While leaving, do I tell him that it's because of his actions that I'm leaving? Do I erupt in anger and tell him that he shouldn't have put the blame on me, since he was the one doing the deployment?
    3. How do I keep myself from getting into such situations with my employers in the future?

    This is going to be my second resignation to this company. The first time I wanted to resign, I was asked to stay back, and my manager promised a lot of changes, a couple of which were made.

    Read the article

  • Not Attending Oracle OpenWorld? Not to Worry!

    - by Kristin Rose
    In an ideal world, Oracle partners near and far would all be able to partake in our grand OpenWorld event, set to unfold on Sunday, September 30th, but we understand that not everyone can make it. What we're trying to get at here is this: if you're not attending, you're going to be missed! But even if you can't be there in person, you can still watch the OPN Exchange keynotes, general sessions, and more through our live YouTube channel! Isn't technology wonderful?

    Just like last year, Oracle OpenWorld will stream 24x7 via our Oracle YouTube channel. So whether you're on a bus or a train, at the office or in your favorite recliner, visit http://www.youtube.com/oracle to take part in this "live" Oracle OpenWorld experience. Finally, don't miss Judson Althoff's Partner Keynote from 1-3 pm PST on Sunday, September 30th. For more information about the conference (including the keynote schedule), or to register and attend Oracle OpenWorld in person, please visit the Oracle OpenWorld Website.

    It's Almost Showtime,
    The OPN Communications Team

    Read the article

  • 12.10 unable to install or even run from Live CD with nVidia GTX 580

    - by user99056
    I've used Ubuntu in the past (set up as a web server, etc., over in Iraq), so I'm not a 100% Linux noob; however, I'm running into a brick wall here. I've got a machine I built when I got back to the US earlier this year, running Windows 7 Ultimate, and I've now got some free time and would like to transition over to Ubuntu full time. I've searched around in the forums, and there seems to be an issue with the nVidia graphics cards, so I've tried going to the EVGA site to see if I could find a new BIOS update for the card and had no luck, so I'm back searching the forums here again and decided to just go ahead and post my question. My apologies if this is covered in another post and I was just unable to find it. I've found a few 'similar' posts, but nothing as bad as my issue.

    With the history aside, here is the actual detailed issue. I purchased a new SSD (Intel 520 SSD, arrived today) and disconnected my old Windows 7 SSD. I had downloaded ubuntu-12.10-desktop-amd64 earlier today and burned it to DVD. Upon inserting the Live CD into the computer and booting up, everything was fine up to the 'Run from Live CD' or 'Install Ubuntu Now' buttons. As I was sure I wanted to go ahead and make the switch, I selected 'Install Now' from the right-hand side. The CD spun up, a black window popped up, and then the errors started:

    date/time GPU Lockup
    date/time Failed to idle channel 1
    date/time PFIFO - playlist update failed
    date/time Failed to idle channel 2
    date/time PFIFO - playlist update failed

    Thinking it might correct itself, I let it run. It would swap over to a GUI screen that was locked up with major blurring, then back to the command line with the errors. Eventually it said something along the lines of 'unknown status' and switched back to the GUI and froze. So that's when I tried to see if I could find a BIOS upgrade for the nVidia GTX 580 cards, and had no luck.

    So I thought, why not try to just run it from the Live CD and see if I can at least get a look at it; maybe if I could get it running, I could try to do some sort of install from there and fix the driver issue. I rebooted, brought up the Live CD, and this time chose the left option, run from the CD. It brought me all the way into the desktop; I saw my drives and the other icons, and could move the mouse, etc., for about 30 seconds, and then it locked up completely. I've tried this a couple of times and get the same results every time.

    Hardware: Intel i7-3930K CPU @ 3.2GHz (12 CPUs) / MSI MS-7760 motherboard / 32GB RAM / 2 x EVGA (nVidia) GeForce GTX 580 (4GB RAM each)

    So the question is: is there any way to install 12.10 if you can't even get the Live CD to run (for more than 30 seconds)? In my current hardware configuration, both of the GTX 580 cards have an SLI jumper on them, and I have 2 monitors on each card. (Ubuntu info obviously only shows on the main monitor from the failed installation and the attempt at running the Live CD.) Perhaps opening the machine back up, removing the SLI jumper, and removing the other 3 monitors (so it would only have 1 video card with one monitor on it) would actually allow me to get 12.10 installed; then I could work on an nVidia video driver fix for the GTX 580, and then possibly hook up the other video card and monitors? Or is this something that they are currently aware of and may address in a future release in the next few days/weeks?

    Any thoughts or suggestions would be greatly appreciated, as I can't even try to fix the issue (assuming it is the nVidia drivers) if I can't even get it to install at all.

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (i.e., none) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock. I'd taken it up to 3GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, running Vista, etc.)

    According to further research, the machine uses a Micro-BTX (uBTX) motherboard, and since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take?

    1. Find an AM2 socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two.

    2. Find an Intel uBTX mobo and a (relatively -- hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :)

    3. Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :)

    My questions:
    • If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated?
    • Where can I actually find a new, in-box, not-system-pull, not-refurb AM2 uBTX mobo? Do they even exist anymore?
    • What kind of money are we talking (US dollars)?

    The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this; I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • Firefox 3.5.6 causes entire computer to freeze

    - by Anthony Aziz
    Here's the situation.

    Environment: just installed a fresh copy of Win7 Pro 32-bit to an NTFS partition on a 750GB SATA drive. Data is stored on a separate partition, including My Documents. No security software is yet installed. No extensions installed yet.

    Hardware:
    • E8400 3GHz
    • ASUS P5QL Pro
    • 4GB DDR2 1066 RAM
    • EVGA 9800 GTX+
    • Plenty of cooling, no problems with hardware before

    Problem: while using Firefox, sometimes the entire computer will freeze/hang. I get no mouse or keyboard input, can't Ctrl+Alt+Del, no "not responding" indication, just a static image on my display. My drivers are all up to date as far as I'm aware (I just installed this copy of Windows last week). I first noticed this when trying to install Xmarks. I went to the Xmarks site and tried to install, and it would freeze. I managed to get it installed (safe mode and the Mozilla addon site worked), but when I go to configure it (log in, etc.), the computer freezes. I don't think it's a matter of usage time or memory issues, because while testing, I browsed wallpaper galleries for about 30 minutes, sometimes with as many as 12-15 tabs open at a time, without issue. Sometimes I won't even try to install Xmarks and it will hang. I can install (some) other extensions; the only one I've tried is a download status bar (which works).

    What I've done to try to fix it:
    • Restarted (duh)
    • Windows safe mode
    • Completely removed Firefox and installed it to a new directory, according to Mozilla's KB (I haven't tried the profile manager, though I assume this does the same thing, except perhaps more thoroughly)
    • Some BIOS changes, including power options and disabling overclocking (it was a modest overclock on the CPU, which has run the Win7 beta and RC for almost a year now)
    • Memtest
    • Used another Windows user profile, same tragic results

    I'm STUCK now, with no idea what to do. I'm using Chrome as my main browser at the moment, but that's not something I want to be stuck with. I like Firefox and want to use it. I'm going to try creating a new profile first. One thing I did notice: I started leaving Task Manager and Performance Monitor open when anticipating (but dreading) a freeze. firefox.exe had low CPU and low memory, but it looked like overall disk usage was seeing some spikes on the small graph Performance Monitor gives you. I saw in one blog post that a fellow using XP moved his Local Settings directory from a separate drive to his main drive, and that solved it, but I don't think my AppData directory is on my D: drive, and that's on the same physical device anyway. Still, something that might be worth trying.

    I'd extremely appreciate any help. Thanks very much. I really don't want to reinstall Windows from scratch again :(

    Anthony Aziz

    Read the article

  • No network upsets gnome

    - by Darren Cook
    An issue that has been bothering me for over a year now. My notebook, running Ubuntu 10.04, is almost always using a wired connection with a static IP address, and a remote DNS server. The network is configured with entries in /etc/network/interfaces and /etc/resolv.conf, rather than whatever the gnome UI tool was. (*)

    But if I'm out, or simply unplug the network cable, a few things get weird. Specifically, the gnome-panel stops working - it is still there, but isn't updating. And opening a nautilus window (e.g. to look at files on the local disk) has huge time-outs. By that I mean it will not open the window for something like 30 or 60 seconds, but when it does finally open it, I can see the files and it is perfectly usable. Everything else works fine: alt-tab between windows, etc. I use the command line to find the pid of gnome-panel, kill it, wait a couple of seconds, and it opens up a fresh panel which is normally usable. (Something like 10 minutes later it will have locked/crashed again; the same for the nautilus windows.)

    I'm guessing this is a DNS issue? Would setting up a local DNS server help? Guess number 2 was related to having a file server mount (samba, though running on another linux box) and symbolic links to files and directories on that file server on my desktop. My question is a bit vague... Does anyone recognize these symptoms and have a suggestion? Or do you have some troubleshooting suggestions for narrowing down the problem?

    My /etc/hosts:

        127.0.0.1 localhost
        127.0.1.1 myhost
        # The following lines are desirable for IPv6 capable hosts
        ::1 localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts
        127.0.0.1 testsite.local
        # Other test website URLs here

    UPDATE: Some timings to open some desktop folder icons. This is after pulling out the network cable. A sub-directory of the desktop took 23 secs to open up; content appears immediately (just 8 files; it has no further subdirectories). The home directory icon took 12 seconds to open up, but then took about 30 seconds for the files to appear. I closed it and tried again. This time it took 18 seconds to open up, but then 70 seconds before anything appeared.

    *: I couldn't work out how to use the gnome network tool for my needs, which include 3-4 static IPs for testing virtual hosts locally.
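
    Since the symptoms point at DNS lookups stalling the desktop, one quick way to test that theory (a small diagnostic sketch; the hostnames are placeholders) is to time name resolution directly with the cable unplugged:

        import socket
        import time

        # With an unreachable DNS server, each lookup should stall for the
        # resolver timeout (commonly several seconds per attempt) before
        # failing, which would match the 30-60 second delays described above.
        for host in ["localhost", "example.com"]:
            start = time.monotonic()
            try:
                addr = socket.gethostbyname(host)
                print(f"{host} -> {addr} in {time.monotonic() - start:.2f}s")
            except socket.gaierror as exc:
                print(f"{host} failed after {time.monotonic() - start:.2f}s: {exc}")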

    Read the article

  • VMware Workstation reboots 32-bit host when starting 32/64-bit guest using VT-x

    - by Powerman
    I'm trying to start a 64-bit guest (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start the 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, in about a second (I suppose as soon as the guest tries to switch to 64-bit), my host reboots (actually, it's more like a reset - the normal reboot procedure is skipped and the BIOS POST starts immediately).

    My hardware is a Core 2 Duo 6600 on an ASUS P5B-Deluxe with the latest stable BIOS, 1101. I power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times after that, sometimes with power-cycling, and ensured Vanderpool is enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product." About a year ago I tried disabling hardened in the kernel to ensure this isn't because of PaX/GrSecurity, but that didn't help. I have not checked 32-bit guests with VT-x enabled yet, but without VT-x they work OK. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is that it's a motherboard/BIOS bug. Any ideas?

    Update 1: I've tried booting the vt.iso provided at http://communities.vmware.com/docs/DOC-8978 and here is its report:

        CPU 0: VT is enabled on this core
        CPU 1: VT is enabled on this core

    Update 2: I've just tried to boot 32-bit guests (Windows 7, Ubuntu 9.04 and Gentoo) using all possible virtualization modes. In Automatic, Automatic with Replay, and Binary translation, everything works. In Intel VT-x/EPT or AMD-V/RVI, I get the message "This host does not support EPT. Using software virtualization with a software MMU." and everything works. BUT in Intel VT-x or AMD-V mode, all 32-bit guests reset the host just like the 64-bit guests! So this issue is not specific to 64-bit guests.

    One more thing. Using Intel VT-x or AMD-V mode for both 32/64-bit guests, my host resets right after starting the VM, i.e. before the VM BIOS POST and before the guest even starts booting. But using Intel VT-x/EPT or AMD-V/RVI, the VM BIOS runs OK, then the 64-bit guests start booting (Windows 7 completed the "Loading files" progress bar), and only after that does the host reset.
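
    A quick host-side check that complements the vt.iso report above (a small sketch; it reads the standard CPU feature flags Linux exposes, which show what the silicon offers but not whether the BIOS has left the feature enabled or locked):

        # 'vmx' is the Intel VT-x flag; 'svm' is the AMD-V equivalent.
        with open("/proc/cpuinfo") as f:
            flags = {flag
                     for line in f if line.startswith("flags")
                     for flag in line.split(":", 1)[1].split()}

        print("VT-x (vmx):", "vmx" in flags)
        print("AMD-V (svm):", "svm" in flags)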

    Read the article

  • APC UPS replace battery light and apcupsd reporting "replace battery"

    - by mgjk
    We have an APC Smart-UPS 1500. The "Replace Battery" light is on, and apcupsd reports:

        Emergency! Batteries have failed on UPS xxxx. Change them NOW

    However, from this article, http://sturgeon.apcc.com/kbasewb2.nsf/for+external/f39c4312fcaf7b948525679a005ebb78?OpenDocument, it seems it's not so clear that the UPS battery needs to be replaced. Stranger still, according to the information on the UPS, an 11-minute runtime at 42.9% load, running at 27.7V, isn't so bad. Any thoughts about what to try next? We're a non-profit, so money is an object. It would be a shame to replace a battery with a year or so left in it.

        # apcaccess status
        APC      : 001,041,1017
        DATE     : Thu Mar 29 13:01:41 EDT 2012
        HOSTNAME : oreilly2
        VERSION  : 3.14.6 (16 May 2009) debian
        UPSNAME  : xxxx
        CABLE    : Custom Cable Smart
        MODEL    : Smart-UPS 1500
        UPSMODE  : Stand Alone
        STARTTIME: Thu Mar 29 12:57:30 EDT 2012
        STATUS   : ONLINE
        LINEV    : 112.3 Volts
        LOADPCT  : 42.9 Percent Load Capacity
        BCHARGE  : 100.0 Percent
        TIMELEFT : 11.0 Minutes
        MBATTCHG : 5 Percent
        MINTIMEL : 3 Minutes
        MAXTIME  : 0 Seconds
        OUTPUTV  : 112.3 Volts
        SENSE    : High
        DWAKE    : -01 Seconds
        DSHUTD   : 090 Seconds
        LOTRANS  : 106.0 Volts
        HITRANS  : 127.0 Volts
        RETPCT   : 000.0 Percent
        ITEMP    : 23.8 C Internal
        ALARMDEL : Always
        BATTV    : 27.7 Volts
        LINEFREQ : 60.0 Hz
        LASTXFER : No transfers since turnon
        NUMXFERS : 0
        TONBATT  : 0 seconds
        CUMONBATT: 0 seconds
        XOFFBATT : N/A
        SELFTEST : NO
        STATFLAG : 0x07000008 Status Flag
        SERIALNO : AS0603298896
        BATTDATE : 2006-01-14
        NOMOUTV  : 120 Volts
        NOMBATTV : 24.0 Volts
        FIRMWARE : 601.3.D USB FW:1.5
        APCMODEL : Smart-UPS 1500
        END APC  : Thu Mar 29 13:02:12 EDT 2012

    Error when running apctest:

        You are using a SMART cable type, so I'm entering SMART test mode
        mode.type = USB_UPS
        Setting up the port ...
        Hello, this is the apcupsd Cable Test program.
        This part of apctest is for testing Smart UPSes.
        Please select the function you want to perform.
        1) Query the UPS for all known values
        2) Perform a Battery Runtime Calibration
        3) Abort Battery Calibration
        4) Monitor Battery Calibration progress
        5) Program EEPROM
        6) Enter TTY mode communicating with UPS
        7) Quit
        Select function number: 2
        First ensure that we have a good link and that the UPS is functioning normally.
        Simulating UPSlinkCheck ...
        Wrote: Y
        Got: getline failed. Apparently the link is not up. Giving up.
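
    One way to sanity-check the "replace battery" flag against the data above (a sketch only; it parses apcaccess-style output and uses the BATTV, NOMBATTV, and BATTDATE fields shown in the question):

        from datetime import date

        def parse_apcaccess(text):
            # Turn "KEY : value" lines from `apcaccess status` into a dict.
            status = {}
            for line in text.splitlines():
                key, sep, value = line.partition(":")
                if sep:
                    status[key.strip()] = value.strip()
            return status

        sample = """BATTV    : 27.7 Volts
        NOMBATTV : 24.0 Volts
        BATTDATE : 2006-01-14"""

        s = parse_apcaccess(sample)
        battv = float(s["BATTV"].split()[0])
        nomv = float(s["NOMBATTV"].split()[0])
        # Age as of the question's date stamp (2012-03-29).
        age_years = (date(2012, 3, 29) - date.fromisoformat(s["BATTDATE"])).days / 365.25

        print(f"battery {battv}V vs nominal {nomv}V, {age_years:.1f} years old")
        # A float voltage above nominal is normal; the age is the red flag here -
        # at ~6 years this battery is well past the typical 3-5 year service life.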

    Read the article

  • Authenticating Apache HTTPd against multiple LDAP servers with expired accounts

    - by Brian Bassett
    We're using mod_authnz_ldap and mod_authn_alias in Apache 2.2.9 (as shipped in Debian 5.0, 2.2.9-10+lenny7) to authenticate against multiple Active Directory domains for hosting a Subversion repository. Our current configuration is:

        # Turn up logging
        LogLevel debug

        # Define authentication providers
        <AuthnProviderAlias ldap alpha>
            AuthLDAPBindDN "CN=Subversion,OU=Service Accounts,O=Alpha"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap beta>
            AuthLDAPBindDN "CN=LDAPAuth,OU=Service Accounts,O=Beta"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://ldap.beta:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>

        # Subversion Repository
        <Location /svn>
            DAV svn
            SVNPath /opt/svn/repo
            AuthName "Subversion"
            AuthType Basic
            AuthBasicProvider alpha beta
            AuthzLDAPAuthoritative off
            AuthzSVNAccessFile /opt/svn/authz
            require valid-user
        </Location>

    We're encountering issues with users that have accounts in both Alpha and Beta, especially when their accounts in Alpha are expired (but still present; company policy is that the accounts live on for a minimum of 1 year). For example, when user x (who has an expired account in Alpha and a valid account in Beta) authenticates, the Apache error log reports the following:

        [Tue May 11 13:42:07 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14817] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:08 2010] [warn] [client 10.1.1.104] [14817] auth_ldap authenticate: user x authentication failed; URI /svn/ [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
        [Tue May 11 13:42:08 2010] [error] [client 10.1.1.104] user x: authentication failure for "/svn/": Password Mismatch
        [Tue May 11 13:42:08 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    Attempting to authenticate as a non-existent user (nobodycool) results in the correct behavior of querying both LDAP servers:

        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:40 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://ldap.beta:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:44 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:44 2010] [error] [client 10.1.1.104] user nobodycool not found: /svn/
        [Tue May 11 13:42:44 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    How do I configure Apache to correctly query Beta if it encounters an expired account in Alpha?
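
    The two log excerpts differ in a way worth isolating: Alpha answers "Invalid credentials" for the expired account (a failed bind), while only a truly unknown user produces "No such object" (a failed search), and mod_authnz_ldap only falls through to the next provider in the second case. A standalone way to reproduce both phases outside Apache (a diagnostic sketch using the python-ldap package; the base DN and passwords are placeholders) is:

        import ldap  # pip install python-ldap

        def check_user(uri, bind_dn, bind_pw, base, username, password):
            # Mirror mod_authnz_ldap's two phases: search for the user while
            # bound as the service account, then re-bind as the DN found.
            conn = ldap.initialize(uri)
            conn.simple_bind_s(bind_dn, bind_pw)
            result = conn.search_s(base, ldap.SCOPE_SUBTREE,
                                   f"(sAMAccountName={username})")
            if not result:
                return "user not found - Apache tries the next provider"
            user_dn = result[0][0]
            try:
                ldap.initialize(uri).simple_bind_s(user_dn, password)
                return "authenticated"
            except ldap.INVALID_CREDENTIALS:
                # An expired AD account fails here exactly like a wrong
                # password does, so Apache stops without consulting Beta.
                return "invalid credentials - Apache stops here"

        print(check_user("ldap://dc01.alpha:3268",
                         "CN=Subversion,OU=Service Accounts,O=Alpha", "secret",
                         "O=Alpha", "x", "users-password"))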

    Read the article

  • Emails from Google Apps to custom SMTP server delayed by 1 hour consistently

    - by vimalk
    The outgoing mails from Google Apps/Gmail to our own custom SMTP server are delayed by 1 hour, consistently. mxtoolbox.com diagnostics of our custom SMTP server look OK. Our custom SMTP server receives emails from other sources (Yahoo, Hotmail, etc.) on time. Looking at the SMTP headers shows the delay happening at an intermediate Google SMTP server:

        Received: by qwi2 with SMTP id 2so1989393qwi.3
                for <[email protected]>; Thu, 27 Jan 2011 03:54:23 -0800 (PST)
        MIME-Version: 1.0
        Received: by 10.224.19.203 with SMTP id c11mr1587082qab.170.1296125657457;
                Thu, 27 Jan 2011 02:54:17 -0800 (PST)

    This setup had been working fine for a year, though our custom email server was missing a reverse DNS entry and SPF records. Thinking that this could be the cause of the issue, we added these entries a week ago, but the issue still persists. Here are more details:

    • We are using Google Apps to host our primary domain email (say: mydomain.com)
    • The custom SMTP server (say: s1.mydomain.com) hosts our subdomain (say: sub.mydomain.com)

    This is how the email log looks from [email protected] to [email protected]:

        Return-Path: [email protected]
        Received: from localhost.localdomain (LHLO s1.mydomain.com) (127.0.0.1) by s1.mydomain.com with LMTP; Thu, 27 Jan 2011 17:24:28 +0530 (IST)
        Received: from localhost (localhost.localdomain [127.0.0.1]) by s1.mydomain.com (Postfix) with ESMTP id 605116A6565 for <[email protected]>; Thu, 27 Jan 2011 17:24:28 +0530 (IST)
        X-Virus-Scanned: amavisd-new at sub.mydomain.com
        X-Spam-Flag: NO
        X-Spam-Score: 2.984
        X-Spam-Level: **
        X-Spam-Status: No, score=2.984 tagged_above=-10 required=6.6 tests=[AWL=-0.337, BAYES_50=0.001, DNS_FROM_OPENWHOIS=1.13, FH_DATE_PAST_20XX=3.188, HTML_MESSAGE=0.001, HTML_OBFUSCATE_05_10=0.001, RCVD_IN_DNSWL_LOW=-1] autolearn=no
        Received: from s1.mydomain.com ([127.0.0.1]) by localhost (s1.mydomain.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RBjF7Wwr44mP for <[email protected]>; Thu, 27 Jan 2011 17:24:24 +0530 (IST)
        Received: from mail-qw0-f44.google.com (mail-qw0-f44.google.com [209.85.216.44]) by s1.mydomain.com (Postfix) with ESMTP id BB5DE6A6512 for <[email protected]>; Thu, 27 Jan 2011 17:24:23 +0530 (IST)
        Received: by qwi2 with SMTP id 2so1989393qwi.3 for <[email protected]>; Thu, 27 Jan 2011 03:54:23 -0800 (PST)
        MIME-Version: 1.0
        Received: by 10.224.19.203 with SMTP id c11mr1587082qab.170.1296125657457; Thu, 27 Jan 2011 02:54:17 -0800 (PST)
        Received: by 10.220.117.17 with HTTP; Thu, 27 Jan 2011 02:54:17 -0800 (PST)
        Date: Thu, 27 Jan 2011 16:24:17 +0530
        Message-ID: <[email protected]>
        Subject: test : 16:24
        From: X <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0015175cba2865a5fe049ad1c5cd

    Note the one-hour gap between the two Gmail-internal hops: the message was stamped by 10.224.19.203 at 02:54:17 -0800 but only picked up by qwi2 at 03:54:23 -0800. We'd appreciate any help that could solve this issue :)
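
    To confirm which hop eats the hour on other delayed messages (a sketch using only the Python standard library; the filename is a placeholder for a saved copy of a delayed mail):

        import email
        import email.utils

        with open("delayed_message.eml") as f:
            msg = email.message_from_file(f)

        # Received headers are stored newest-first; reverse them to get
        # delivery order, then print the time spent between consecutive hops.
        stamps = []
        for hdr in reversed(msg.get_all("Received", [])):
            # The timestamp is the part after the last semicolon.
            when = email.utils.parsedate_to_datetime(hdr.rsplit(";", 1)[-1].strip())
            stamps.append((hdr.split(";")[0][:60], when))

        for (_, prev), (hop, cur) in zip(stamps, stamps[1:]):
            print(f"+{(cur - prev).total_seconds():7.0f}s  {hop}")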

    Read the article

  • Windows 7 boot problem (with colorful blinking smilies)

    - by Ishmael Smyrnow
    I put my computer (Windows 7) to sleep, and a couple of hours later I tried to wake it back up, but the monitor wouldn't come back on. I did a hard reset (held the power button), but I still couldn't get the monitor to show anything. I plugged it into my laptop, and the monitor works fine. I then swapped out the video card with an older one I have. The monitor came on and started showing the boot process. However, shortly after the Windows 7 animated logo came up, the screen went blank, it made this weird beeping noise, and I saw the strangest thing ever: small, colorful blocks started to fill my screen and flash, as if something was loading. Inside of those blocks were smilies (like the ASCII character kind). This continued for about a minute, then the computer rebooted. It scared the sh!t out of me. I've never had a virus before, and I'm savvy enough to keep myself from one, but I'm wondering if that's what it was. I've been using computers for ages, and I've never seen anything quite like this. Has anyone ever seen something like this? I'm doing hardware diagnostics before trying to boot into Windows again. Hopefully I can figure this out, but I thought I would consult the SU community while I wait on these results.

    -- UPDATE --

    I did a Memory Diagnostic, which turned up nothing. I also booted into Safe Mode with no problem, and scheduled a disk check on both of my drives (I dual-boot XP & 7). I was feeling good, and tried putting my regular video card back in, and the monitor won't display anything with it. Also, even though the monitor displays nothing, the system sounds like it's booting up. However, I hear a clicking in one of my hard drives that isn't there with the older video card. Could this be a problem with my hard drive, video card, or PSU? PSU makes sense, except for the fact that I've been using the same setup for over a year, and the video card doesn't require its own power plug.

    Read the article

  • KeePass justification

    - by Jeff Walker
    I work at a place that tries to take security seriously, but sadly, they often fail. Currently, one of the major ways they fail is password management. I personally have about 20 accounts (my personal user ID on lots of machines). For shared "system" accounts, there are about 45 per environment: development, test, and production. I have access to 2 of those, so my personal total is somewhere around 115 accounts. Passwords have to be at least 15 characters with some extensive but standard complexity constraints, and have to be changed every 60 days or so (system accounts every year). They also should not be the same for different accounts, but that isn't enforced. Think DoD-type standards. There is no way to remember and keep up with this. It just isn't humanly possible, as far as I'm concerned. This might be a good justification for a centralized account management system, a la LDAP or Active Directory, but that is a totally different battle.

    Currently the solution is an Excel spreadsheet. They use Excel to put a password on it, and then most people make a copy and remove the password. This makes my stomach turn. I use KeePass for this problem, and it manages all of my accounts very well. I like features such as auto-typing, grouping, plugins, password generation, etc. It uses AES-256 encryption via the .Net framework, and while not FIPS compliant, it has a very good reputation. The only problem is that they are also very careful about using randomly downloaded software, so we have to justify every piece of software on our workstations. I have been told that they really don't want me to use this, because of the "sensitive nature" of storing passwords. sigh

    My justification has to be "VERY VERY strong". I have been tasked with writing a justification for KeePass, but as I am lazy, I would like any input that I can get from the community. What do you recommend? Is there something out there that is better or more respected than KeePass? Are there any security experts saying interesting things on this topic? Anything will help at this point. Thanks.

    Read the article

  • Logitech Optical Mouse Frozen In Middle of Windows XP Pro Screen

    - by Code Sherpa
    I have a Logitech optical mouse/keyboard. I have been using them just fine with the system drivers for almost a year now. I recently updated my Kaspersky software and rebooted. Now the mouse is frozen in the middle of my screen. I am not able to log in to the Windows XP Pro box that has the frozen mouse (because I can't work the mouse), but I am able to remote desktop to this computer.

    Things I know / have tried:
    • When I boot the problem computer, I am able to use the keyboard, but not the mouse.
    • I have installed the latest version of Logitech's SetPoint (with the updated drivers) on the problem computer (via remote desktop) and that didn't seem to matter.
    • I bought new batteries for the mouse and that didn't matter.
    • I have tried the mouse/keyboard on another computer and the mouse works just fine there.

    My suspicion is that the Kaspersky install has overwritten a driver of some sort.

    Things I have not done (and would appreciate detailed steps if you feel this is the way to go):

    1) Uninstall all the mouse drivers on the machine, reboot, and reinstall. Note: when I get to Device Manager, I don't see an option for Human Interface Devices (where the mouse device is). Here are my options: Computer, Disk Drives, DVD/CD-ROM drives, Floppy controllers, IDE ATA/ATAPI, Imaging devices, Network Adapters, Other devices, Ports, Processors, Sound, video, and gaming, System devices, USB controllers. Also, I should point out that Video Controller is the only thing under Other devices, and it has a yellow exclamation mark. The same is true for all the items under Universal Serial Bus controllers. I think this means I have to update my BIOS, but since my mouse was working just fine without doing that, I don't think that is my problem. So, how do I get to my mouse device?

    2) Update my BIOS. Note: as pointed out above, I don't think this matters, as my mouse was working just fine under my computer's current BIOS version.

    Thanks for your help.

    Read the article

  • Keyboard mappings are totally screwed after updating to KDE 4

    - by zeonglow
    I recently upgraded from KDE 3.5 to KDE 4, and I have been having weird issues with my keyboard. In one of the virtual consoles (e.g. when I press Ctrl+Alt+F1) I can type perfectly, but in KDE several of the number keys don't work, and the left and right arrow keys don't work either.

    When I press the right arrow key, xev reports:

        KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 903459, (111,55), root:(115,836), state 0x10, keycode 114 (keysym 0x1008ff11, XF86AudioLowerVolume), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False

    When I press the '3' key it toggles my Bookmarks toolbar in Firefox; in xev I get:

        KeyPress event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 999968, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False
        KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 1000032, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False

    As this is deeper down, changing the type of keyboard in the KDE menus has no effect. I'm slowly beginning to wade through the mountains of documentation about the X keyboard model, but there has to be a better way. Does anyone know what it is?

    Edit: the number keys work again (1234567890!) after deleting the entire .kde folder, but only until I change the keyboard settings from the "system settings" applet; then it's hosed full-time, regardless of what I set the settings to ("restore to default settings" doesn't help).

    2nd edit: I'm using Gentoo AMD64, and I was upgrading from KDE 3.5 to KDE 4.2. I think I had manual settings before, although I didn't change anything. I was originally running KDE without HAL until that stopped working a year or so ago. The only customisation I made was to set the multimedia keys to control Amarok.

    3rd edit:

        $ grep xkb /var/log/Xorg.0.log
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"

    Xorg.0.log also has this to say:

        (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
        (WW) Disabling Mouse1
        (WW) Disabling Keyboard1

    My xorg.conf has this in it:

        Identifier "Keyboard1"
        Driver "kbd"
        Option "AutoRepeat" "500 30"
        # Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
        Option "XkbRules" "xorg"
        Option "XkbModel" "pc105"
        Option "XkbLayout" "gb"
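
    The log lines above point at a likely cause: with AllowEmptyInput on, the server disables the configured Keyboard1 section (layout "gb") and auto-configures the keyboard through evdev with layout "us", so whatever keycode map the new KDE writes wins. One hedged fix, a sketch against this xorg.conf rather than a verified recipe, is to turn that behaviour off so the configured sections are honoured again:

        Section "ServerFlags"
            # Assumption: disabling AllowEmptyInput re-enables the
            # Keyboard1/Mouse1 sections the log shows being disabled.
            Option "AllowEmptyInput" "off"
        EndSection

    For the current session, running setxkbmap -model pc105 -layout gb from a terminal should restore the intended mapping without restarting X.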

    Read the article

  • Run Flyff without elevating user to Admin or requiring Admin Password

    - by AnonJr
    Bottom line: I need to set up one game on my little sister's laptop to run without requiring an admin password/account. It's the only game that seems to insist on it... so far.

    Detailed version: I set up my 14-year-old sister as a regular user on her Windows 7 Home Premium laptop, and almost everything has been fine until she found a new game (Flyff) that doesn't seem to want to run without an admin password (or being logged in as an admin). For what should be obvious reasons, I'm not going to make her an admin or give her the admin password (which she swears she'll only use to run this game... anyone else buying that? Bueller?). Also, the parents aren't admins on her laptop (they are on their own, but that's another discussion for another day) and I'm not going to set them up as admins, as I know from past experience that the third time my sister asks them to put in their password, they'll just tell her what it is, at which point I might as well have just set her up as an admin from the outset.

    This is a Windows 7 Home Premium (64-bit, but I doubt that makes a difference) laptop, so using GPEdit is out. I also tried an answer provided in a related (but less specific) question. The app has read/write permissions for its folder in Program Files (x86), yet that doesn't seem to make a difference. I have not yet dug through the registry as mentioned in another answer to the aforementioned question. Just to be thorough, I have checked the "Run as Admin" option on the shortcut's properties, to no avail. Am I missing something?

    Addendum 2010-11-11: Re-checked permissions as per Joel's answer, and it didn't make a difference. Followed Jane T's suggestion (and Aeo's second) and created a "Games" folder outside Program Files, installing the game there and making sure regular users had all the permissions they would need. No joy. After the latter of the above two changes, it occurred to me that it may be a UAC issue, so for kicks I turned off UAC; still the damn message.

    Last item noted: could it be a result of the publisher not being specified/verified? I've been taking a closer look at the error message, and it occurred to me that the missing/unverified publisher info could have been the problem all along... Correct me if I'm wrong, but if that's the case, it means there's nothing I can do short of giving her some sort of admin privileges (i.e. elevating her account, or giving her the password to a separate admin account) or giving Mom an admin account.
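
    One way to double-check the permissions theory before blaming the publisher signature: walk the install folder and list anything the regular user cannot write to. A rough sketch in Python (the install path below is hypothetical, and os.access is only approximate on Windows, mostly reflecting the read-only attribute, so treat any output as a hint rather than proof):

        import os

        GAME_DIR = r"C:\Games\Flyff"  # hypothetical install path

        # Report files and directories the current (non-admin) user cannot
        # write to; any hit here could explain an elevation prompt.
        for root, dirs, files in os.walk(GAME_DIR):
            for name in dirs + files:
                path = os.path.join(root, name)
                if not os.access(path, os.W_OK):
                    print("not writable:", path)

    Run this from the sister's account, not an admin one, so the check reflects her effective access.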

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal.

    Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    1. When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X" or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    2. What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are hogging resources right now. That way, I could know when performance is bad and then what's causing it to be so bad.

    3. Why does PHP memory matter? My site apparently has a 30MB PHP memory footprint. Will it run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.
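
    On the first question: when an Apache worker dies, Apache's own error log (typically /var/log/apache2/error.log or /var/log/httpd/error_log) is the first place to look; segfaults and similar failures are recorded there with timestamps you can match against the crash. For ongoing monitoring, here is a minimal, hedged sketch (assuming Python and the third-party psutil package on the server; the 100 MB threshold is an arbitrary placeholder) that gives top-style numbers with a little context:

        import time
        import psutil

        # Poll Apache workers every 10 seconds and flag anything using more
        # than THRESHOLD_MB of resident memory -- a crude leak detector.
        THRESHOLD_MB = 100

        while True:
            for proc in psutil.process_iter(["pid", "name", "memory_info"]):
                if proc.info["name"] in ("apache2", "httpd"):
                    rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                    if rss_mb > THRESHOLD_MB:
                        print(f"PID {proc.info['pid']}: {rss_mb:.0f} MB resident")
            time.sleep(10)

    For the color-coded, history-keeping view described above, tools like Munin or Cacti are the usual noob-friendly options.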

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it?

    A file exists on the web server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Tue, 02 Feb 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The problem is that changes made to the file do not get served; the old (i.e. February of last year) version keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Tue, 02 Feb 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:

    - renamed the file
    - deleted the file
    - restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\), and this only happens from the outside (i.e. the internet). If you issue the request locally on the server, you get:

    - the current file (file there)
    - 404 (file renamed)
    - 404 (file deleted)

    But if I change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    (note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js), then I do get the proper file (or the 404 I expect if the file has been renamed or deleted). But any subsequent changes to the file will not show up until I change the case of the file I'm asking for.

    The IIS logs all indicate that the (internet) requests are coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, I conclude that there is a proxy: the requests serviced from this proxy are not logged in the IIS logs, the requests for new files are logged (and from the client IP, not a proxy IP), and the proxy is case-sensitive.

    This does not sound like something Microsoft, or IIS, would do:

    - a transparent proxy
    - case-sensitive
    - unlogged
    - surviving restarts of IIS
    - surviving in a cache for hours

    I can't believe that our customer's IIS is doing these things, so I'm assuming there is some other transparent proxy in front of IIS. Or does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)
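
    One way to demonstrate the case-sensitive cache to whoever runs the network path in front of this server: request the same resource under several casings and compare the Last-Modified headers. A small probe sketch, assuming Python on a test client (host and paths are the ones from the post); a case-insensitive origin such as IIS on NTFS returns the same file for all three, so diverging answers implicate a case-sensitive cache in between:

        import http.client

        HOST = "www.stackoverflow.com"
        PATHS = ["/javascript/ModifyQuoteArea.js",   # original casing
                 "/javascript/modifyquotearea.js",   # all lowercase
                 "/javascript/MoDiFyQuOtEArEa.js"]   # mixed, as in the post

        for path in PATHS:
            conn = http.client.HTTPConnection(HOST)
            conn.request("GET", path)
            resp = conn.getresponse()
            print(path, resp.status, resp.getheader("Last-Modified"))
            resp.read()   # drain the body so the connection can close cleanly
            conn.close()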

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed (see the earlier discussion on Splunk alternatives), but it has been several months, so it may be time to revisit it.

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider. (When I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000.) That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter. The Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system?

    Our basic needs:

    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data asynchronously
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    UPDATE: We spent two weeks researching commercial and open-source options. Our conclusion: write our own (we are a software company... we know how to write things). We built a great system on MongoDB and .NET that gives us the functions we needed, in about one engineering week, and we have now completed our implementation. We use two MongoDB servers (master and slave) and are able to log and index any amount of log data (5 GB/day, 15 GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution at a $1,000-3,000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open-source solution that is really usable. Thank you all for your input (even if it was self-promotion).
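
    Their build was .NET on MongoDB; the core "listen on UDP, index asynchronously" piece is small in any language. A minimal illustrative sketch in Python (assuming the third-party pymongo driver and a MongoDB instance on localhost; not their actual implementation):

        import socketserver
        from pymongo import MongoClient

        collection = MongoClient()["logs"]["syslog"]  # assumes mongod on localhost

        class SyslogHandler(socketserver.DatagramRequestHandler):
            def handle(self):
                # One document per incoming syslog datagram.
                data = self.rfile.read().decode("utf-8", errors="replace")
                collection.insert_one({"source": self.client_address[0],
                                       "message": data.strip()})

        if __name__ == "__main__":
            # Syslog's standard port is 514, which needs root/admin rights;
            # 5140 is used here so the sketch runs unprivileged.
            server = socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler)
            server.serve_forever()

    A regular (or text) index on the message field then gives the rudimentary search engine; the UI and API layers sit on top of the same collection.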

    Read the article

  • File Not Found error launching Guest OS in a non-administrator user account

    - by ToreTrygg
    Hi, I am running Fusion 2.0.6 (196839) on an iMac running 10.6.2 with 3 user accounts (1 administrator). I have Fusion set up to share the guest OS, and it's been working splendidly for nearly a year. Within the guest OS (Windows XP Pro), there are also 3 user accounts (1 administrator).

    Last night I went to back up my VM to an external drive, and to minimize the file size and transfer time, I deleted all snapshots but the most recent one. I then backed up the VM externally (28.23 GB). Today, one of my users tried to launch the guest OS from within her user account and received the following error message:

        File Not Found: Windows XP Pro-000006.vmdk
        This file is required to power on this virtual machine. If this file was moved, please provide its new location.

    My two choices are Cancel and Browse. When I browse, I can locate the Windows XP Pro-000006.vmdk file, which is contained within the VM bundle (Windows XP Pro.vmwarevm). However, it still won't launch from a non-admin user account. If I view the package contents of the VM bundle from the user account, the above file is present and appears to be created upon each launch of the guest OS.

    If I go back to my administrator account on the Mac and then launch Fusion, the guest OS works perfectly for all 3 user accounts within XP Pro. I have tried to delete the guest OS from Fusion's Library within the problem user account and then re-connect it to that Library, but the result is the same. The guest OS data integrity is 100%, but it is accessible only from the OS X administrator account.

    This problem only surfaced after deleting several older snapshots. Again, the data is there, and the guest OS powers up normally in the Mac's administrator account, but it persistently returns the above error when attempting to power on from a non-admin account on the Mac. I'm not sure how this affects the error, but when I look at Hard Disk Settings, the "unable-to-locate" file is the filename of the virtual HD. I don't want to make any changes to my (working) VM without advice from the knowledgeable people on this forum. Any help will be greatly appreciated, thanks!
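
    Since the VM powers on only from the Mac's admin account, and the missing -000006.vmdk is recreated on each launch, Unix file permissions inside the bundle are a plausible suspect: files written during the snapshot merge by the admin account may not be readable or writable by the other users. A hedged diagnostic sketch in Python (the bundle path below is illustrative):

        import os
        import stat

        BUNDLE = "/Users/Shared/Windows XP Pro.vmwarevm"  # illustrative path

        # List files inside the bundle that are not world-readable; a
        # -000006.vmdk owned by the admin with tight modes would match.
        for root, dirs, files in os.walk(BUNDLE):
            for name in files:
                path = os.path.join(root, name)
                mode = os.stat(path).st_mode
                if not mode & stat.S_IROTH:
                    print(oct(mode & 0o777), path)

    If tight modes show up, fixing ownership and permissions from the admin account (e.g. with chown/chmod in Terminal) before relaunching Fusion would be the next experiment.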

    Read the article

< Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >