Search Results

Search found 22725 results on 909 pages for 'case sensitivity'.

  • Securing PHP on a shared Apache

    - by Jack
    I'm going to install Apache and PHP on a server where two users, A and B, will deploy their websites. I'm trying to achieve isolation of each user's space for security reasons: no script from site A should be able to read files in site B. To achieve this I installed suPHP. The website files of user A are owned by A:A with permissions 700, and those of user B are owned by B:B with permissions 700. suPHP works great, but Apache complains that it lacks permission to read .htaccess. How can I let Apache read the .htaccess file in every directory of A and B while keeping site A and site B isolated from each other? I played with ownership (group = www-data) and permissions (750) but found no way to guarantee isolation. Any ideas? Maybe by running Apache as root, but in that case what are the drawbacks?
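
    A minimal sketch of the layout the question already hints at (directories group-owned by www-data with mode 750, and only .htaccess made group-readable); the paths and user names are invented for illustration, and whether this preserves isolation in practice depends on how suPHP and Apache are configured:

        import os
        import grp
        import pwd

        # Hypothetical docroots and owners; adjust to the real layout.
        SITES = {"/var/www/siteA": "A", "/var/www/siteB": "B"}
        WWW_DATA_GID = grp.getgrnam("www-data").gr_gid

        for docroot, owner in SITES.items():
            uid = pwd.getpwnam(owner).pw_uid
            for dirpath, dirnames, filenames in os.walk(docroot):
                # Directories: owner rwx, group (www-data) r-x, others nothing,
                # so Apache can traverse but the other site's user cannot.
                os.chown(dirpath, uid, WWW_DATA_GID)
                os.chmod(dirpath, 0o750)
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    os.chown(path, uid, WWW_DATA_GID)
                    # Only .htaccess is made readable by Apache's group here;
                    # static assets that Apache serves directly would also need 0o640.
                    os.chmod(path, 0o640 if name == ".htaccess" else 0o600)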

  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using a method taken from Game Coding Complete to detect whether a point is inside a polygon. It works in almost every case, but it fails on a few edge cases and I can't figure out why. For example, given a polygon with vertices at (0,0), (0,100) and (100,100), the algorithm returns:
    - True for any point strictly inside the polygon
    - False for any of the vertices
    - False for (0, 50), which lies on one of the edges of the polygon
    - True (?) for (50,50), which is also on one of the edges of the polygon
    I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible I'd also like to give it enough tolerance that it always tends towards "true" in the face of floating-point fluctuations. For example, I have another method that, given a line segment and a point, returns the closest location on the segment to that point. Currently, when I take a point outside the polygon and project it onto one of the edges, some of the resulting points are classified as inside by the method above while others are considered outside; I'd like enough tolerance that the test always returns true in this situation. The way I've currently solved the problem is a hack: I use an external library to inflate the polygon by a few pixels and perform the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
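
    One common way to get this behaviour (a sketch of the general idea, not the Game Coding Complete routine itself) is to treat any point within some epsilon of an edge as inside, which automatically covers vertices and edge points, and to fall back to an ordinary ray-casting test otherwise:

        import math

        def point_in_polygon_with_tolerance(point, polygon, eps=1e-6):
            """True if point is strictly inside polygon, on an edge or vertex,
            or within eps of any edge."""
            px, py = point
            n = len(polygon)

            # 1. Points within eps of any edge (this also covers the vertices).
            for i in range(n):
                ax, ay = polygon[i]
                bx, by = polygon[(i + 1) % n]
                abx, aby = bx - ax, by - ay
                ab_len_sq = abx * abx + aby * aby
                # Parameter of the closest point on segment AB, clamped to [0, 1].
                if ab_len_sq == 0:
                    t = 0.0
                else:
                    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len_sq))
                cx, cy = ax + t * abx, ay + t * aby
                if math.hypot(px - cx, py - cy) <= eps:
                    return True

            # 2. Standard ray-casting test for strictly interior points.
            inside = False
            for i in range(n):
                ax, ay = polygon[i]
                bx, by = polygon[(i + 1) % n]
                if (ay > py) != (by > py):
                    x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
                    if px < x_cross:
                        inside = not inside
            return inside

    Raising eps makes the test err towards "true" in the face of floating-point noise, which is the behaviour asked for; the helper assumes the polygon is a list of (x, y) tuples.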

  • In Blender 3D, is it possible to save a keyframe of a mesh that has a soft body property

    - by Steven Rogers
    I am using Blender 3D right now, and I 'baked' a cloth soft body. However, I want just one keyframe of the cloth. In this case, I made curtains for a window and gave them the cloth simulation. I baked it to just how I want the cloth to look, but for my animation I want a single, still cloth object to be placed. I want the curtains to be one still, cloth-looking object for the whole animation. So is there a way I can get that mesh to stay in that one position for the entire animation? If so, how do I do it?

  • Reusing Web Forms across BPM Roles

    - by Mona Rakibe
    Recently Varsha (another BPM Product Manager) approached me with a requirement where she wanted to reuse the same Web Form for different task activities. We both knew this is easily achievable: the human task outcomes can differ to distinguish the submission based on roles. Her requirement went slightly beyond this, though; she also wanted to hide some data based on the logged-in user. If you have worked with Web Form rules, dynamically showing and hiding data is a common requirement and easily achievable using Form Rules. In this case the challenge was accessing the BPM role inside the Web Form. Although we will be addressing this requirement in a future release, she wanted an immediate solution (aha, after all, customers are not the only ones who cannot wait). Thankfully we managed to come up with a solution, and I hope it will be helpful to a larger audience. The solution has three steps:
    Step 1: Add a hidden attribute to the form (Role). The purpose of this attribute is just to store the current logged-in user's role; we pass the value in during data association.
    Step 2: In the data association step, pass the role value based on the swimlane.
    Step 3: Use this hidden attribute value in your Web Form rules for dynamic behavior.
    Detailed steps and a sample can be downloaded from Java.net.

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment.  Moving such a database between data centers over the Internet is not without its challenges.  In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data.  To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This reports the table's row count along with its reserved, data, index and unused space.  Of course this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice to run a query that lists all of the tables, ordered by the space they're taking up.  Thanks to Mitchel Sellers (and Gregg Starks' CURSOR template) and a tiny bit of my own edits, now you can!  Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName    'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize     'Data Size',
               indexSize    'Index Size',
               unusedSize   'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

  • WCF - Automatically create ServiceHost for multiple services

    - by Rajesh Pillai
    Welcome back, readers!  This blog post is about a small tip that may make working with the WCF ServiceHost a bit easier if you have lots of services and need to host them quickly for testing.  Recently I ran into a situation where we had to create multiple service hosts quickly for testing.  Here is the code snippet, which is pretty self-explanatory.  You can put this code in your service host, which in this case is a console application.

        using System;
        using System.Collections.Generic;
        using System.Configuration;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;

        class Program
        {
            static void Main(string[] args)
            {
                // Stores all hosts
                List<ServiceHost> hosts = new List<ServiceHost>();
                try
                {
                    // Get the services element from the serviceModel element in the config file
                    var section = ConfigurationManager.GetSection("system.serviceModel/services") as ServicesSection;
                    if (section != null)
                    {
                        foreach (ServiceElement element in section.Services)
                        {
                            // NOTE: If the type lives in another assembly, use an assembly-qualified
                            // name of the form "<type full name>, <assembly name>",
                            // e.g. "Business.Services.CustomerService, Business.Services"
                            var serviceType = Type.GetType(element.Name); // Get the service type from its name
                            var host = new ServiceHost(serviceType);
                            hosts.Add(host); // Add to the host collection
                            host.Open();     // Open the host
                        }
                    }
                    Console.ReadLine();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                    Console.ReadLine();
                }
                finally
                {
                    foreach (ServiceHost host in hosts)
                    {
                        if (host.State == CommunicationState.Opened)
                        {
                            host.Close();
                        }
                        else
                        {
                            host.Abort();
                        }
                    }
                }
            }
        }

    I hope you find this useful.  You can turn this into a Windows service if required.

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever noticed, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables just by right-clicking on the table name you're interested in? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe you had no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL, with long and convoluted CROSS APPLYs, until I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ############################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012
        # Synopsis: Generate a file containing the root table CREATE (DDL) script along with all its related tables
        ############################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-Host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

  • TDD: Write a separate test for object initialization, or rely on other tests exercising it?

    - by DXM
    This seems to be a common pattern emerging in some of the tests I've worked on lately. We have a class, quite often legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function that puts an object into a valid state; only after it is initialized/loaded are the members in the proper state for other methods to be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is an exercise of the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad".
    So my question: is TestLoad even necessary? Do you drop it and let the other tests simply exercise the loading, or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach I might be missing here?
    Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
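
    For readers who want to see the shape of this compromise, below is a minimal sketch in Python's unittest, used only as a stand-in for CppUnit (which the question is really about); the Document class and its methods are invented for illustration. Loading happens in setUp, one explicit test documents the loaded state, and every other test exercises loading implicitly:

        import unittest

        class Document:
            """Hypothetical legacy-style class that must be loaded before use."""
            def __init__(self):
                self.loaded = False
                self.items = None

            def load(self, source):
                self.items = list(source)  # imagine expensive parsing here
                self.loaded = True

            def count(self):
                assert self.loaded, "load() must be called first"
                return len(self.items)

        class DocumentTest(unittest.TestCase):
            def setUp(self):
                # Every test gets a freshly loaded object, so the loading path
                # is exercised implicitly by the whole suite.
                self.doc = Document()
                self.doc.load(["a", "b", "c"])

            def test_load_puts_object_into_valid_state(self):
                # The explicit "TestLoad": if loading breaks, this test's name
                # tells you where to start looking.
                self.assertTrue(self.doc.loaded)
                self.assertEqual(self.doc.items, ["a", "b", "c"])

            def test_count_after_load(self):
                self.assertEqual(self.doc.count(), 3)

        if __name__ == "__main__":
            unittest.main()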

  • Location of development solutions on disk - common or up to the individual?

    - by dreza
    In our team meeting today a senior member brought up the proposal that we should have a common location/structure for our development solutions. A couple of his points were:
    - Making it common means that when talking about projects and emailing stuff, everyone is on the same wavelength and knows where to look.
    - If there is ever a need to hard-code a location path, it will work across all developers' PCs.
    He had a few more points to back up his suggestion, but I unfortunately got distracted during the discussion and so didn't hear all of them. I have no issue with the idea and can see its merits, but I was wondering whether it is common or even recommended that all developers place their code in the same folder structure, or whether developers prefer the flexibility of locating solutions wherever they want. We currently use SVN for our version control. In this case his recommendation was to place all code in:

        c:\Work\Development\<Customer>\<project>\Code\<solution>\

    I guess the actual path is irrelevant for this question, but I've added it for completeness.

  • What stops HTML5 and JS apps from performing as well as native apps?

    - by Amogh Talpallikar
    From what I understand, HTML is a markup language, and so are XAML, XIB, whatever Android uses, and the markup of other native UI development frameworks. JavaScript is the programming language used along with it to handle client-side scripting, which includes things like event handling, client-side validation and anything else that C#, Java, Objective-C or C++ do in those frameworks. There are MVC/MVVM patterns available in frameworks like Sencha's and Angular. We have localStorage, in the form of both SQLite and a key-value store, just as other frameworks have, and there are API specifications for almost everything that is missing. Whenever a native UI framework has to render UI, it has to parse similar markup and render it.
    Question breakdown: What stops HTML and JS from doing the same thing themselves? Instead of having a web control or browser as a layer in between, why can't HTML (along with CSS) and JS be made to perform the same way? Even if there is a layer, so are the .NET runtime and the JVM in other cases where C++ and C are not being used. So let's take the case of Android: like Dalvik, why can't Chromium be another option (alongside Dalvik and the NDK), where HTML does what Android's markup does and JavaScript does what Java does?
    So the question is: even if current implementations aren't as good, is it theoretically possible to get HTML5-based applications to work like native apps, especially on mobile?

  • How can I use Apache log files to recreate a usage scenario?

    - by daigorocub
    Recently I installed a website that got too many requests and was too slow. Many improvements have since been made to the website code, and we've also bought a new server. I want to test the new server with exactly the same requests that made the old server slow. After that, I will double the requests, run new tests, and so on. These requests are logged in the Apache log files, so I can parse those files and write some kind of script to make the same requests. Of course, in this case the requests will be made only from my computer against the server, but hey, better than nothing. Questions: is there an app that does this already? Would you use wget? ab? A Python script? Thanks!
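
    As a starting point for the "Python script" option, here is a minimal sketch; it assumes the standard common/combined log format and replays only GET requests, and the log path and target host are placeholders:

        import re
        import urllib.request

        # Matches the request portion of a common/combined-format log entry.
        LOG_PATTERN = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

        def replay_log(log_path, base_url):
            """Re-issue every GET request found in an Apache access log against base_url."""
            with open(log_path) as log:
                for line in log:
                    match = LOG_PATTERN.search(line)
                    if not match or match.group("method") != "GET":
                        continue  # skip POSTs etc.; replaying them blindly could change data
                    url = base_url + match.group("path")
                    try:
                        with urllib.request.urlopen(url) as response:
                            print(response.status, url)
                    except Exception as exc:
                        print("FAILED", url, exc)

        # replay_log("access.log", "http://new-server.example.com")

    Doubling the load is then a matter of running the same log through several processes or threads at once, although tools like ab remain better suited to sustained load generation.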

  • Deleting Unused Swap Partitions

    - by Nikita Kononov
    Good evening everyone, I have a little issue with swap partitions. Due to some problems after installing Ubuntu the first time, I reinstalled it and now I have three swap partitions. Here is the result of sudo fdisk -l:

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xaa9693fe

           Device Boot      Start         End     Blocks  Id  System
        /dev/sda1           2048    52430847   26214400  1c  Hidden W95 FAT32 (LBA)
        /dev/sda2   *   52430848   540677076  244123114+  7  HPFS/NTFS/exFAT
        /dev/sda3      540678142  1465147391  462234625   5  Extended
        Partition 3 does not start on physical sector boundary.
        /dev/sda5     1452750848  1465147391    6198272  82  Linux swap / Solaris
        /dev/sda6     1440352256  1452742655    6195200  82  Linux swap / Solaris
        /dev/sda7      540678144  1427951615  443636736  83  Linux
        /dev/sda8     1427953664  1440339967    6193152  82  Linux swap / Solaris

    As far as I understand, the swap partitions on /dev/sda5 and /dev/sda6 are no longer in use, so I was planning to delete them, but I ran into a problem. I downloaded and burned the GParted Live CD, booted it up and tried to delete those partitions, but I have no idea how to add the resulting 12 GB of unallocated space to the existing OS partition, in this case /dev/sda7. Is there any way I can delete the two swap partitions and extend /dev/sda7 into the unallocated space? Thank you in advance!

  • How do I revert to the official Linksys firmware from dd-wrt on a WRT54G2 v1?

    - by Chris Moore
    I've been having trouble with dd-wrt on my Linksys WRT54G2 v1 router and want to go back to the stock Linksys firmware for it. The router has only 2MB of flash memory, so I'm running the 'micro' version of dd-wrt. My question is: what is the best way to do that? I could use the dd-wrt "firmware upgrade" web interface at http://router/Upgrade.asp, in which case there's a dropdown menu choice for "After flashing, reset to": "don't reset" or "reset to default settings". Which should I pick? Some people say that I should use a program called tftp.exe instead; I can probably gain access to a Windows machine if that is necessary. Which of these is the way to proceed? I don't want to brick the router if at all possible! Note: I used the 'wrt54g' tag because I wasn't allowed to create a 'wrt54g2' tag due to my low rep here.

  • Can a website have too many bindings?

    - by justSteve
    IIS 7.x on a Windows Server 2008 Web edition dedicated server. I have a site that's serving a few dozen affiliates, many of which hit it via a subdomain of their own root domain, and all of which have a subdomain specific to their account. E.g. my affiliate named 'Acme' hits my site via:
    - myApp.Acme.com (his root domain, my app)
    - Acme.MyDomain.com (his account within my root domain)
    Currently I'm adding each of these as a binding entry in IIS (targeting a discrete IP, not '*'). As I ramp this up to include more affiliates, I'm wondering if I should be concerned about how many bindings this site handles. Probably, in Acme's case, I can do without the 'Acme.MyDomain.com' binding because, in reality, all traffic takes place via myApp.Acme.com. Mine is a niche site with very low volume compared to most. At what point do I need to worry about all those bindings? Thanks!

  • E-mail no longer coming to AOL inbox since I opened a Hotmail account [closed]

    - by Dave
    I recently set up an email account with Hotmail. I have used AOL e-mail for years and would like to keep it as my main way of sending and receiving e-mail. When I set up the Hotmail account, there was an option to have all my e-mail accounts forwarded to the Hotmail inbox. I enabled this option, and the page promised that all incoming mail would still be visible in the original inbox, in this case my AOL account. Well, everything does come to the Hotmail account; the only problem is that I no longer receive these e-mails in my primary/desired location, which is AOL. I went back and made sure the box was checked for the Hotmail option that says "leave a copy of my messages on the server" (AOL). How do I get the e-mail that is being forwarded to Hotmail from my AOL account to still be visible in AOL? It should be in the AOL inbox, as that is the original destination, but it is gone. Suggestions?

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
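
    One way to test the write-caching/round-trip theory is a small timing sketch like the one below, where the paths and sizes are placeholders: if throughput to the share improves sharply as the block size grows while the local disk barely changes, small synchronous writes from the application are the likely bottleneck.

        import os
        import time

        def timed_write(path, total_mb=256, block_kb=64):
            """Write total_mb of zeros to path in block_kb chunks and report MB/s."""
            block = b"\0" * (block_kb * 1024)
            blocks = (total_mb * 1024) // block_kb
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(blocks):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())  # make sure the data actually left the client
            elapsed = time.time() - start
            print(f"{path}: {total_mb / elapsed:.1f} MB/s with {block_kb} KB blocks")

        # Hypothetical paths -- point these at a local disk and a mounted share:
        # timed_write(r"D:\scratch\test.bin", block_kb=4)
        # timed_write(r"\\nas\share\test.bin", block_kb=4)
        # timed_write(r"\\nas\share\test.bin", block_kb=1024)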

  • Using PDO with MVC

    - by mister martin
    I asked this question at Stack Overflow and received no response (it was closed as a duplicate with no answer). I'm experimenting with OOP and I have the following basic MVC layout:

        class Model {
            // do database stuff
        }

        class View {
            public function load($filename, $data = array()) {
                if (!empty($data)) {
                    extract($data);
                }
                require_once('views/header.php');
                require_once("views/$filename");
                require_once('views/footer.php');
            }
        }

        class Controller {
            public $model;
            public $view;

            function __construct() {
                $this->model = new Model();
                $this->view = new View();
                // determine what page we're on
                $page = isset($_GET['view']) ? $_GET['view'] : 'home';
                $this->display($page);
            }

            public function display($page) {
                switch ($page) {
                    case 'home':
                        $this->view->load('home.php');
                        break;
                }
            }
        }

    These classes are brought together in my setup file:

        // start session
        session_start();

        require_once('Model.php');
        require_once('View.php');
        require_once('Controller.php');

        new Controller();

    Now where do I place my database connection code, and how do I pass the connection on to the model?

        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE, DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (PDOException $err) {
            die($err->getMessage());
        }

    I've read about Dependency Injection, factories and miscellaneous other design patterns talking about keeping SQL out of the model, but it's all over my head with abstract examples. Can someone please just show me a straightforward practical example?

  • Keep getting smbd errors, but apport asks questions I can't answer

    - by Steve Kroon
    What I want to know below is where to report a bug against the poor series of questions I'm being asked...
    Every time I reboot (at least), I get a crash dialog ("Sorry, Ubuntu 12.04 has experienced an internal error"). Clicking on "Show details" shows that the problem is with smbd, but the rest of the trace does not appear. When I choose to continue and send a bug report, I am told I will be asked a series of questions in a window titled "Apport". The first question asks:
    How would you best describe your setup?
    - I am running a Windows File Server
    - I am connecting to a Windows File Server
    Since I am doing both, I have no idea which to choose. In any case, selecting one leads to the next question:
    Did this used to work properly with a previous release?
    - No
    - Yes
    But I never tried to use Samba in a previous release, so I can't honestly answer this. After picking an option here, I get more questions in a similar vein. Surely there should be "I don't know / I haven't tried" options, even if they simply mean you can't submit a useful bug report.

  • Terminal closing itself after 14.04 upgrade

    - by David
    All was fine in 12.04; in this case I'm running Ubuntu in VirtualBox on Windows. In recent days the warning message about my Ubuntu version no longer being supported was coming up pretty often, so yesterday I finally decided to upgrade. The upgrade process ran OK: no errors, no warnings. After rebooting, the errors started. Just after booting up there were some errors about video, GNOME and video textures (sorry, I didn't pay much attention at the time, so I don't remember them well). Luckily that went away after installing the VirtualBox additions. But the big problem is that I can't use the terminal. It opens OK when pressing Ctrl+Alt+T, but most commands cause it to close instantly. For example, df, ls, mv, cd... usually work, although it has closed a few times even with those. But 'find' causes an instant close, and 'apt-get update' kills it too, just after it gets the package list from the sources, when it starts processing them. I've tried xterm: everything works and I have none of these problems there. I have tried reinstalling konsole, bash-static and bash-completion, but nothing worked. I have no idea what to do, as there is no error message to search for the cause. It seems to be something related to bash, but that's all I know.

  • Do support sites like Stack Overflow upset the paid-support open source model?

    - by ajax81
    In order to stay relevant in the marketplace, I'm researching new business models for my software company. The open source model with paid support seems like a good fit for our product, but I have concerns about whether a paid-support model is viable in an era where top-notch help is readily available for free on sites like those in the Stack Exchange network.
    Case in point: I moved my employees to Ubuntu last year because I didn't want to pay for Windows 7 licenses and new hardware (plus, the Mono platform was highly attractive). My staff had no Linux experience, but they were able to achieve relative competency in about 120 days with the help of AskUbuntu, Stack Overflow, and a few "For Dummies" books. We did employ an Ubuntu consultant for 7 days to provide training and support, but beyond that we spent $0.00 on any kind of paid expertise.
    As for my due diligence, I ran a three-month beta of the freemium/paid-support model with one of our smaller customers and achieved mediocre results. I'd like to think it's because our software is so stable and easy to use that the customer didn't need much paid support, but I suspect they circumvented the terms of our SLA in the same way that we did with the move to Ubuntu.
    Does anyone out there have any thoughts, advice, or experience relevant to the move I'm considering? What worked, what didn't, etc.?

  • Google Reader can read this RSS feed, why can't my RSS reader?

    - by Nicole
    I have a website that I would like to subscribe to via my RSS reader. The website itself doesn't publicize its RSS feed; however, when I used Google Reader it was able to find one, and it works perfectly well. Google Reader cites http://www.stratfor.com/rss.xml as the address for the RSS feed. However, when I try to load that feed myself, it says "Page not found". I suspect that the file does exist on the website but is forbidden, and that for some weird reason Google Reader has access to it. Is that the case? Anyway, I would really like my own RSS reader to be able to subscribe to it, because it has functionality that Google Reader does not. Besides, it intrigues me: how come Google Reader can read it, but other RSS readers cannot?
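
    One way to investigate is to request the feed with different User-Agent headers and compare the responses. This is only a diagnostic sketch (the agent strings are examples; Google's feed crawler historically identified itself as "Feedfetcher-Google"), and whether the server really filters on User-Agent is an assumption:

        import urllib.error
        import urllib.request

        FEED_URL = "http://www.stratfor.com/rss.xml"

        USER_AGENTS = [
            "Mozilla/5.0 (X11; Linux x86_64)",   # a typical desktop browser
            "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html)",
            "MyRssReader/1.0",                   # a hypothetical feed reader
        ]

        for agent in USER_AGENTS:
            request = urllib.request.Request(FEED_URL, headers={"User-Agent": agent})
            try:
                with urllib.request.urlopen(request) as response:
                    print(agent, "->", response.status, len(response.read()), "bytes")
            except urllib.error.HTTPError as err:
                # A 403/404 here while another agent succeeds would suggest
                # the server is treating clients differently.
                print(agent, "->", err.code, err.reason)

    If every agent gets the same "Page not found", the difference more likely lies in how Google Reader discovered the feed (for example, via a different URL) than in access control.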

  • Permissionless external drive with NTFS

    - by user12889
    I have an external hard disk with one partition, formatted as NTFS. I use this drive on multiple computers with different logins on different machines, both Windows XP and Windows 7. All files are plain old files, not OS-encrypted or compressed. Every now and then Windows 7 does not let me access some files, citing permission problems. I can work around this case by case by taking ownership and setting appropriate permissions, but that is tedious. Is there a simple way to tell Windows not to enforce or store any permissions on any file or directory on a partition?

  • How to delete just one LINE of text (NOT a table-row!) with a single KEYBOARD shortcut in Microsoft Office Word 2010?

    - by Sk8erPeter
    Is there a shortcut to delete just one line (which is NOT a table row, just a single line of text) in Microsoft Office Word 2010? If not, how can I assign one? In the worst case, can I write a macro (in VBA) that does the same thing, bound to a custom shortcut? To clarify my problem: I would like to avoid multiple clicks and/or pressing multiple keys, even when I click in the middle of a line of text. :) For example, in Notepad++ I can delete the entire current line with Ctrl+L, in NetBeans I can delete an entire line with Ctrl+E, in Eclipse I can delete the current line with Ctrl+D, and so on, and in all of them it doesn't really matter where the cursor actually is... so these simple solutions exist elsewhere, and that is what I'm looking for in Word too. It would really simplify my work in huge documents.

  • Media center consumes all available memory when attempting to play music off of a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off my Twonky Media Server / Windows Media Connect share (based on an HP WHS machine with an Atom CPU), it plays choppily. When I open Resource Monitor, it shows that after I tell the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it?
    Edit: it happens when I attempt to browse the server's music, not just when I play music.
    Edit 2: the "ehshell" process is what consumes the memory; it appears to be something specific to Media Center. Moreover, the ehshell process doesn't die in this case.
    Edit 3: It only happens when browsing my Twonky library, and not my Windows Media Connect library.

  • Ubuntu 12.04 desktop - Error: unknown command 'gfxmode'. Pressing any key continues

    - by Andy
    Premise: Linux noobie here. I have the same issue as the OP: a fresh 12.04 desktop, changed GRUB with Grub Customizer, and now I get "unknown command 'gfxmode', press any key to continue", etc. I was asked to re-post this question and link to the thread I refer to above. I have tried what Tarek said, and nothing seems to work. I find two lines containing gfxmode:

        function gfxmode {
        gfxmode \$linux_gfx_mode

    Note: I'm not sure if it matters, but in the error message the two single quotes around gfxmode are not the same; the first is a slanted quote mark, the second (after gfxmode) is a straight one. I commented out the whole line, and I tried to add 'set' before gfxmode; neither made any difference. I found another place that said to remove the line from another file, 40_custom, but I checked and those files do not contain anything related to the line we are looking for:

        gfxmode $linux_gfx_mode

    I'm not sure what I'm missing, but a file called linux.save has recently appeared when searching for the line; I'm not sure if it's just a temp file of some kind. In any case I cannot seem to get it working. What am I missing? Thanks! P.S. Sorry for any mess-ups in form :)
