Search Results

Search found 24731 results on 990 pages for 'corner case'.

  • Setting the Timezone with an automated script

    - by Tom
    I'm writing scripts to automate setting up new Slicehost installations. In a perfect world, after I started the script it would just run, with no attention from me. I have succeeded, with one exception: how do I set the timezone in a manner that is permanent (survives reboot), sane (adjusts for standard and daylight saving time, so not just forcing the date), and doesn't require input from me? Currently I'm using dpkg-reconfigure tzdata, but this doesn't seem to have any way to force parameters into it; it demands user input. EDIT: I'm editing here, rather than commenting, since comments don't seem to allow code blocks. Here's the actual code I ended up with, based on Rudedog's comment below. I also noticed that this doesn't update /etc/timezone. I'm not certain who uses that, but in case anybody does, I'm setting that too.
      TIMEZONE="America/Los_Angeles"
      echo $TIMEZONE > /etc/timezone
      cp /usr/share/zoneinfo/${TIMEZONE} /etc/localtime   # This sets the time
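    For completeness, dpkg-reconfigure itself can usually be driven without prompts. The following is a sketch, assuming a Debian/Ubuntu system of this era where a noninteractive tzdata run picks the zone up from /etc/timezone:
      # Hypothetical fully scripted variant: write the zone, then reconfigure without prompts
      echo "America/Los_Angeles" > /etc/timezone
      dpkg-reconfigure -f noninteractive tzdata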

    Read the article

  • IP KVM switch, or serial console box for remote admin?

    - by grahzny
    We have a small server farm (11 now, and we may add more in the future) of HP ProLiant DL160 G6s. They all run either Linux (server only, no X11) or VMware ESX. We had intended to get models with iLO, in case BIOS-level remote admin became an issue, but that didn't happen. An IP KVM switch was recommended to me (along with some sort of remote reboot hardware). I've since realized that none of our machines need GUI administration, so perhaps a serial console switch would be a cheaper and more appropriate option. Something like this: http://www.kvm-switches-online.com/serimux-cs-32.html Do you folks have any opinions on which is the better choice? Should we go for the ease of setup (plug and go, instead of turning on the feature in the BIOS and making sure the serial settings are correct) and the flexibility of an IP KVM switch, even with the extra cost? Or is a serial console switch just fine?

    Read the article

  • Avoid unwanted path in Zip file

    - by jerwood
    I'm making a shell script to package some files. I'm zipping a directory like this:
      zip -r /Users/me/development/something/out.zip /Users/me/development/something/folder/
    The problem is that the resulting out.zip archive has the entire file path in it. That is, when unzipped, it recreates the whole "/Users/me/development/something/" path. Is it possible to avoid these deep paths when putting a directory into an archive? When I run zip from inside the target directory, I don't have this problem:
      zip -r out.zip ./folder/
    In this case, I don't get all the junk. However, the script in question will be called from wherever. FWIW, I'm using bash on Mac OS X 10.6.
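    One way to avoid the absolute paths, sketched under the assumption that the directory layout above is accurate, is to cd into the parent directory in a subshell so zip only records names relative to it:
      # The subshell keeps the caller's working directory untouched;
      # only "folder/..." entries end up in the archive.
      (cd /Users/me/development/something && zip -r out.zip folder/)
    (zip's -j option would junk the paths entirely, but it also flattens the directory structure inside the archive, which is usually not what you want here.)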

    Read the article

  • Ubuntu 13.10 AMD/ATI proprietary driver slow boot time, black screen after logging in and lengthy login/logout delays

    - by NahsiN
    Ubuntu 13.10 is causing me major headaches with my AMD/ATI HD 5770 GPU. Below is a list of the problems I am currently encountering.
    1) The boot time is extended by at least 25s after installing Catalyst 13.4. Using the open source radeon drivers, my boot time to the login screen is ~10s. With Catalyst 13.4 installed, the boot time increases to ~35s. This was not the case in Ubuntu 13.04, 12.10 or 12.04. I have done the driver installation manually (instructions from wiki.cchtml.com) and through Software Center, and there is no difference. I have not tried the Catalyst 13.8 beta driver.
    2) After manual installation of Catalyst 13.4, I get stuck at a black screen after logging in. I have to purge fglrx to resolve the problem. I tried sudo amdconfig --initial -f but it didn't help.
    3) The delay between logging in and Unity being displayed is ~10-15s for BOTH the open source and proprietary drivers. During the delay, it's just a black screen. Whenever I log out, there is again a ~10-15s delay with the login screen appearing stuck before LightDM allows me to enter my password again. This is ridiculous!
    Yes, I could stick with the open source radeon drivers, but I would like to install Steam and play my Valve collection on the machine. Is anybody else encountering similar issues?

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX, or DbContext if we go code first). I can think of a few strategies right off the bat:
    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.
    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do they live in a separate model that is used by every project?
    Don't split at all - one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, at compile time, and possibly at run time.
    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?

    Read the article

  • Rate-Limit affects All clients or single IP?

    - by Asad Moeen
    Up till now I've assumed that iptables rate-limit rules using the "recent" module work per IP address. For example, a rate-limit rule of 20k/s will trigger only if a single IP exceeds a 20k/s rate, and not if 4 different IPs each exceed a 5k/s rate. Please correct me if I've got this wrong, as I've only used these rules for TCP/UDP. Today I tried similar rules for ICMP and applied a 4/s limit on input/output. But then, when running a ping test from just-ping.com, I could see packet loss on almost all IP addresses. How could that happen? If the rule worked per IP address, it shouldn't have been triggered, because I believe each IP from just-ping has a rate of probably 1/s. I still think the per-IP interpretation is true, because if it weren't, my game server would block everyone whenever the combined rate (with many connected players) exceeded the threshold. That hasn't happened up till now, so the ICMP behaviour really confused me. Thank you.
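    For reference, a sketch of the difference (the rules below are illustrative, not taken from the poster's configuration; older iptables versions spell --hashlimit-upto as --hashlimit): the plain limit module keeps one shared counter for all matching packets, while hashlimit in srcip mode keeps a separate counter per source address.
      # Aggregate limit: ALL clients share one 4/s bucket, so many pingers together see loss
      iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 4/second -j ACCEPT
      iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

      # Per-source limit: each client gets its own 4/s bucket
      iptables -A INPUT -p icmp --icmp-type echo-request \
               -m hashlimit --hashlimit-upto 4/second --hashlimit-mode srcip \
               --hashlimit-name icmp-per-ip -j ACCEPT
      iptables -A INPUT -p icmp --icmp-type echo-request -j DROP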

    Read the article

  • Final agenda - Oracle Exadata & Manageability Partner Community Forum at OpenWorld

    - by Javier Puerta
    Just a few days to go until Oracle OpenWorld and our Exadata & Manageability Partner Community Forum for EMEA partners. The event will take place on the afternoon of Monday, October 1st, 2012, during the Oracle OpenWorld week. For all partners that have confirmed their attendance, find below the final detailed agenda. I look forward to meeting again in San Francisco with all of you who can attend, and I hope that you will find the sessions useful for your business.

    FINAL AGENDA
    Oracle Exadata & Manageability EMEA Partner Community Forum at Oracle OpenWorld 2012 in San Francisco, USA
    Monday, October 1st, 2012

    15:30 - Reception of participants; networking coffee served
    16:00 - Welcome (Hans-Peter Kipfer, VP Engineered Systems, Oracle EMEA)
    16:10 - Next challenges in building and managing clouds (Javier Cabrerizo, VP, Global Business Development for Exadata, Oracle Corp.)
    16:30 - Partner experience 1: IT modernization, simplification and cost reduction. The case of a customer in Transportation & Logistics with custom applications and SAP. The Technological Renewal Model built by aligning the innovation of Oracle's Engineered Systems and Capgemini's service delivery excellence has resulted in significant cost savings for the client. (Francisco Bermúdez, Country Leader Infrastructure Services, Capgemini, Spain)
    16:55 - Partner experience 2: The Nvision cloud project. NCloud is an innovative design that combines advanced technical solutions, virtualization, and dynamic management of IT resources, providing a complete "as-a-Service" offering for Infrastructure, Database, Middleware, and Applications. (Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia)
    17:20 - Partner experience 3: From Exadata Ready to Exadata Optimized, an ISV experience. The experience of WeDo Technologies in the process and benefits that started as an Exadata Ready certification and ended up as Exadata Optimized. (Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal)
    17:45 - Next steps in engaging with Oracle (Cengiz Yilmaz, Director Partner Strategy, Oracle EMEA Engineered Systems; Patrick Rood, Manageability Partner Business, Oracle EMEA)
    18:00 - Wrap-up & networking

    Time and location: Monday, October 1st, 2012, 15:30 - 18:00 PST, Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater). It is a 15 minute walk from the OOW Moscone Center. See directions here.

    Read the article

  • Help with IPTables - Masquerading + Forwarding, 1-to-1?

    - by Artiom Chilaru
    I've got a clean Ubuntu Server 10.10 with OpenSSH, OpenVPN and vsFTPd installed. The server is running as a VM on the Hyper-V server (hypervisor), has two network interfaces mapped to physical adapters (eth0 and eth1), and a virtual interface with a direct connection to the hypervisor (eth2). The VPN will create a tun0 interface when a client connects. What I want is for the remote user, connecting over VPN, to be able to connect to the hypervisor (all ports, ping, etc.). The initial idea was to make the VPN create a tap0 interface and bridge eth2 to tap0, but this didn't work, unfortunately, as it seems that the adapters don't want to go into promiscuous mode (partially confirmed by MS). At the same time, both the hypervisor and the remote client over VPN can successfully ping/connect to the Ubuntu server with no problems. So my plan right now is to try doing some 1-to-1 masquerading, if possible. Basically, I want every request sent from the VPN client to the Ubuntu server to be redirected to the hypervisor instead (with IP translation, of course), and every request from the hypervisor to the Ubuntu machine sent to the VPN client (IP translated too). Only 1 client will be connected to the VPN at a time, so I can limit it to a single IP at all times, if necessary. Is this the right way to go, and if so, how can this be achieved? It's almost like a special case of port forwarding, except every single port on tun0 is forwarded to a machine on eth2, and every port on the eth2 side forwards to an IP on tun0. I guess it could be done with iptables, but I'm rather new to Linux, so I can't do it myself... help? :(
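    A rough sketch of the 1-to-1 idea with iptables (every address below is a placeholder, and this assumes IP forwarding is enabled on the Ubuntu VM):
      # Let the VM route between the VPN and the hypervisor network
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # Whatever the VPN client sends to the VM's tun0 address goes to the hypervisor instead
      iptables -t nat -A PREROUTING -i tun0 -d 10.8.0.1 -j DNAT --to-destination 192.168.100.2

      # Rewrite the source so the hypervisor's replies come back through the VM
      iptables -t nat -A POSTROUTING -o eth2 -d 192.168.100.2 -j SNAT --to-source 192.168.100.1
    Here 10.8.0.1 stands for the VM's tun0 address, 192.168.100.2 for the hypervisor, and 192.168.100.1 for the VM's eth2 address; none of these are known from the question.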

    Read the article

  • How do I transition from WUBI to a native installation?

    - by Sammy Black
    I have Ubuntu 10.04 Lucid installed through Wubi on my laptop (it came with Windows 7 preinstalled). This was my first foray into Linux, and I'm here to stay. I have no use for Windows, and yet I must manually choose not to boot into it! Should I shrink the Windows partition to something negligible and grow the Linux one using something like gparted or fdisk, and just be content that everything runs? In that case, I need to understand the filesystems. Which is which? Here's the output of $ df -h:
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/loop0             17G   11G  4.5G  71% /
      none                  1.8G  300K  1.8G   1% /dev
      none                  1.8G  376K  1.8G   1% /dev/shm
      none                  1.8G  316K  1.8G   1% /var/run
      none                  1.8G     0  1.8G   0% /var/lock
      none                  1.8G     0  1.8G   0% /lib/init/rw
      /dev/sda3             290G   50G  240G  18% /host
    I would prefer to start over with a clean install of 10.10 Maverick, but I fear what I may lose. Certainly, I will back up my home directory tree (gzip?), but what about the various pieces of software that I've acquired from the repositories? Can I keep a record of them? By the way, I asked a similar question over on Ubuntu Forums.
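    To answer the "record of packages" part: one common approach (a sketch, assuming the stock Debian/Ubuntu tools) is to dump the package selections before the reinstall and replay them afterwards, alongside an archive of the home directory. Package names occasionally change between releases, so expect a few misses.
      # Before the reinstall: save the package list and the home directory
      dpkg --get-selections > package-selections.txt
      tar czf home-backup.tar.gz -C /home sammy     # "sammy" is a placeholder user name

      # After the clean install: restore the selections and let apt install them
      sudo dpkg --set-selections < package-selections.txt
      sudo apt-get dselect-upgrade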

    Read the article

  • CLR 4.0: Corrupted State Exceptions

    - by Scott Dorman
    Corrupted state exceptions are designed to help you have fewer bugs in your code by making it harder to make common mistakes around exception handling. A very common pattern is code like this:
      public void FileSave(String name)
      {
          try
          {
              FileStream fs = new FileStream(name, FileMode.Create);
          }
          catch (Exception e)
          {
              MessageBox.Show("File Open Error");
              throw new IOException("File Open Error", e);
          }
      }
    The standard recommendation is not to catch System.Exception but rather to catch the more specific exceptions (in this case, IOException). While this is a somewhat contrived example, what would happen if the Exception were really an AccessViolationException or some other exception indicating that the process state has been corrupted? What you really want to do is get out fast, before persistent data is corrupted or more work is lost. To help solve this problem and minimize the chance that you will catch exceptions like this, CLR 4.0 introduces corrupted state exceptions, which cannot be caught by normal catch statements. There are still places where you do want to catch these types of exceptions, particularly in your application's "main" function or when you are loading add-ins. There are also rare circumstances when you know that code that throws an exception isn't dangerous, such as when calling native code. In order to support these scenarios, a new HandleProcessCorruptedStateExceptions attribute has been added. This attribute is applied to the function that catches these exceptions. There is also a process-wide compatibility switch named legacyCorruptedStateExceptionsPolicy which, when set to true, causes the code to operate under the older exception handling behavior. Technorati Tags: CLR 4.0, .NET 4.0, Exception Handling, Corrupted State Exceptions

    Read the article

  • What is the best practice for when to check if something needs to be done?

    - by changokun
    Let's say I have a function that does x. I pass it a variable, and if the variable is not null, it does some action. And I have an array of variables and I'm going to run this function on each one. Inside the function, it seems like good practice to check whether the argument is null before proceeding. A null argument is not an error; it just causes an early return. I could loop through the array and pass each value to the function, and the function will work great. Is there any value in checking whether the variable is null and only calling the function if it is not null during the loop? This doubles up the null check, but: Is there any value gained? Is there any gain in not calling the function? Any readability gain in the loop in the parent code? For the sake of my question, let's assume that checking for null will always be the case. I can see how checking for some object property might change over time, which makes the first check a bad idea. Pseudo code example:
      for (thing in array) { x(thing) }
    Versus:
      for (thing in array) { if (thing not null) x(thing) }
    If there are language-specific concerns, I'm a web developer working in PHP and JavaScript.

    Read the article

  • When is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: Meta descriptions are used by Google probably 80% of the time for the snippet. They don’t help with rankings but you should probably use them. You could just auto generate them from the first part of the question. The description tag exists in the header, like so: <meta name="Description" content="A brief summary of the content on the page."> I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for c# list performance): In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.

    Read the article

  • .htaccess redirect for www in parent folder and children react

    - by ServerChecker
    We were having a problem with the Norton seal not showing up on our affiliate marketing landing pages (landers). It turns out the Norton seal is very picky about the "www." prefix. I had folder paths like /lp/cmpx, where x is a number 1-100 indicating the advertising campaign number. So, to remedy this, I stuck this in my .htaccess file right after the RewriteEngine On line:
      RewriteCond %{HTTP_HOST} ^example\.com
      RewriteRule ^(.*)$ http://www.example.com/lp/cmp1/$1 [R=302,QSA,NC,L]
    The trouble is, I had to do that under every campaign folder, changing cmp1 to whatever the folder name was. Therefore, my question is: is there a way I can do this with an .htaccess file under the parent folder (/lp in this case) and have it work for each of the children? EDIT: Note that I stuck an .htaccess file in /lp just now to test:
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^example\.com
      RewriteRule ^(.*)$ http://www.example.com/lp/$1 [R=302,QSA,NC,L]
    This had no effect on the /lp/cmpx folders underneath, to my dismay.

    Read the article

  • What was your most expensive computer rig?

    - by AlbertoPL
    I'm curious as to how much people are willing to spend on a typical computer. My most expensive machine is a gaming rig complete with an ATI Radeon HD 4850, a 3.0 GHz Wolfdale Intel Core 2 Duo, a 500 GB hard drive, an Antec 900 case, the works. I also have a two-monitor setup. I'd have to say this thing has cost me a little more than a grand at this point, and I'd put the total value of the components at roughly $1300. So, how far have you gone with your computer rigs, and has it been worth it?

    Read the article

  • Should I be using Lua for game logic on mobile devices?

    - by Rob Ashton
    As above, really: I'm writing an Android-based game in my spare time (Android because it's free and I've no real aspirations to do anything commercial). The game logic comes from a very typical component-based model whereby entities exist, have components attached to them, and messages are sent to and fro in order to make things happen. Obviously the layer for actually performing that is thin, and if I were to write an iPhone version of this app, I'd have to re-write the renderer and the core driver (of this component-based system) in Objective-C. The entities are just flat files determining the names of the components to be added, and the components themselves are simple, single-purpose objects containing the logic for the entity. Now, if I write all the logic for those components in Java, then I'd have to re-write them in Objective-C if I decided to do an iPhone port. As the bulk of the application logic is contained within these components, they would, in an ideal world, be written in some platform-agnostic language/script/DSL which could then just be loaded into the app on whatever platform. I've been led to believe, however, that this is not an ideal world, that Lua performance etc. on mobile devices still isn't up to scratch, that the overhead is too much, and that I'd run into trouble later if I went down that route. Is this actually the case? Obviously this is just a hypothetical question. I'm happy writing them all in Java, as it's simple and easy to get things off the ground, but say I actually enjoy making this game (unlikely, given how much I'm currently disliking having to deal with all those different mobile devices) and I wanted to make a commercially viable game: would I use Lua, or would I just take the hit when it came to porting and re-write all the code?

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top-level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top-level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed so users can press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree node(s). For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent.

    As mentioned, the ADF Business Components Data Model only needs to expose the top-level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined. Note: As a general hint of best practices and to simplify your life: make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity, even for application developers, and nothing evil.

    To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon in the Edit Tree Binding dialog, the data structure, Locations - Departments - Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you have configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created.

    Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item is later added to the context menu together with the default context menu options like expand and expand all.

    To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined.
The tree handle is used in the action listener code, which is shown below: public void onTreeNodeDelete(ActionEvent actionEvent) {   //access the tree from the JSF component reference created   //using the af:tree "binding" property. The "binding" property   //creates a pair of set/get methods to access the RichTree instance   RichTree tree = this.getTreeHandler();   //get the list of selected row keys   RowKeySet rks = tree.getSelectedRowKeys();   //access the iterator to loop over selected nodes   Iterator rksIterator = rks.iterator();          //The CollectionModel represents the tree model and is   //accessed from the tree "value" property   CollectionModel model = (CollectionModel) tree.getValue();   //The CollectionModel is a wrapper for the ADF tree binding   //class, which is JUCtrlHierBinding   JUCtrlHierBinding treeBinding =                  (JUCtrlHierBinding) model.getWrappedData();          //loop over the selected nodes and delete the rows they   //represent   while(rksIterator.hasNext()){     List nodeKey = (List) rksIterator.next();     //find the ADF node binding using the node key     JUCtrlHierNodeBinding node =                       treeBinding.findNodeByKeyPath(nodeKey);     //delete the row.     Row rw = node.getRow();       rw.remove();   }          //only refresh the tree if tree nodes have been selected   if(rks.size() > 0){     AdfFacesContext adfFacesContext =                          AdfFacesContext.getCurrentInstance();     adfFacesContext.addPartialTarget(tree);   } } Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html 

    Read the article

  • Disable laptop's display on boot when used with external display

    - by Ryan
    I keep my laptop tucked away and solely use an external display with it via HDMI. In Windows 7 display settings, I have it set up to "Show desktop only on 2" (my external display). This works fine in all cases except when I boot the laptop while the external display is already connected. In that case, the laptop's display stays on and sticks at the Windows 7 boot logo unless I manually shut the display off. (I should mention that while the laptop's display is stuck at the boot logo, the external monitor and the computer are running just fine.) The laptop is an Asus N56VZ with Nvidia 650M graphics and the latest drivers. I've checked Nvidia's control panel as well as the BIOS and nothing looked very promising. Any ideas as to how I can get my laptop screen to shut itself off after booting into Windows?

    Read the article

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder configuration, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via an additional resolv.conf file, which dnsmasq supports. But I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine run the local DNS forwarder? Also... dnsmasq seems to want root permissions when running in an OpenVZ container... does anyone know any workarounds for that?

    Read the article

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my maximum of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and I found that it used to be here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts, from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that. In fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page with all of the links, or possibly of all the items linked to. Also, I know IIS7's SEO Toolkit 1.0 suggests that pages should have no more than 250 links.

    Read the article

  • how to find a text string which may be present in some unknown file in entire filesystem

    - by Registered User
    I am stuck with a problem. Some file contains the line 'something', but I have forgotten which file it is. I would like to find out which file in the entire root file system contains this line, and where. How can I go about this? I have used find before, but then I knew the name of the file; in this case I do not know the file name either. It is an Ubuntu Server 10.04 machine. What can I do to find out which file has this string?
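    A hedged sketch of how one might do this with grep (the pattern and the excluded directories are examples, not taken from the question):
      # -r recurse, -I skip binary files, -l print only the names of matching files
      grep -rIl 'something' / \
           --exclude-dir=proc --exclude-dir=sys --exclude-dir=dev 2>/dev/null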

    Read the article

  • How does the ? make a quantifier lazy in regex

    - by Uriel Katz
    I've been looking into regex lately and figured out that the ? operator makes the *, +, or ? quantifier lazy. My question is: how does it do that? Is *? for example a special operator, or does the ? have an effect on the *? In other words, does regex recognize *? as one operator in itself, or does regex recognize *? as the two separate operators * and ?? If *? is being recognized as two separate operators, how does the ? affect the * to make it lazy? If ? means that the * is optional, shouldn't this mean that the * doesn't have to exist at all? If so, then with a pattern like .*?, wouldn't regex just match separate letters and the whole string instead of the shorter string? Please explain; I'm desperate to understand.
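    A quick way to see the difference in behaviour (a sketch using GNU grep's PCRE mode, assuming grep -P is available):
      echo '<a><b>' | grep -oP '<.*>'     # greedy: one match, the whole string <a><b>
      echo '<a><b>' | grep -oP '<.*?>'    # lazy: two matches, <a> and <b>
    In most engines, *? is parsed as a single lazy-quantifier token rather than as * followed by an independent "optional" ?; the trailing ? tells the preceding quantifier to match as little as possible.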

    Read the article

  • How to Approach CentOS Back Up on GoDaddy Dedicated Hosting

    - by Scott
    Does anyone have any experience with backing up a dedicated server at GoDaddy or anywhere else? I have a CentOS system. I recently made a big newbie mistake working in Linux and toasted my server. I had to start over from scratch because I damaged it so badly. GoDaddy says I need to handle it all myself because I am not paying them for backups. Does anyone have any idea how to approach this backup? I'm not sure how a backup on dedicated hosting would be different from a normal Linux backup. In any case, I don't know how to do normal Linux backups either.
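    As a starting point, a minimal sketch of a self-managed backup (the paths and the remote host are placeholders, and nothing here is GoDaddy-specific):
      # Archive the filesystem, skipping pseudo-filesystems and the archive itself
      tar czpf /backup/full-$(date +%F).tar.gz \
          --exclude=/backup --exclude=/proc --exclude=/sys \
          --exclude=/dev --exclude=/tmp /

      # Copy it off the box so a dead server doesn't take the backups with it
      scp /backup/full-$(date +%F).tar.gz user@backuphost:/backups/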

    Read the article

  • How do I open a 60M png image on OSX

    - by Topener
    Alright, so I've been looking around on this site for how to open a big PNG image. The question I found was about a 10 MB PNG; Xee apparently did the job there. So I downloaded Xee for my 60 MB file, but it crashes. So do iPhoto, Pixelmator and Preview. In the Pixelmator and Xee cases, I actually had to power off the computer and restart it; it crashed so 'hard' I couldn't get it to respond again. How do I open this file (and zoom it)? Specs: newly acquired MacBook Pro, 4 GB memory, 2.3 GHz i7; 58 MB PNG image, approx. 15000x30000 pixels.
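    If the goal is mainly to look at the image, one workaround (a sketch assuming ImageMagick is installed, for example via MacPorts or Homebrew, and big.png stands in for the actual file) is to render a smaller preview instead of opening the full-resolution file:
      # Downscale the roughly 15000x30000 original to a quarter of its size for viewing
      convert big.png -resize 25% preview.png
      open preview.png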

    Read the article

  • Ubuntu 12.04 - Brightness controls not working

    - by Juan Manuel Zolezzi Volpi
    Controls from "Brightness and Lock" were not working so I've tried a solution that involved changing grub, which I'm detailing below: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX="quiet splash acpi_backlight=vendor" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" After doing this, the brightness control dissapeared like you can see at http://screencloud.net/img/screenshots/6b90d56604b70cc749a632d0bc005a20.png Any ideas? Would love to be able to configure Brightness or even use apps like F.lux to regulate it automatically. Edit: I've modified the following line to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=" and now the brightness controls are back, but whatever I change the brightness remains the same. Just in case I'm using Intel H77

    Read the article
