Search Results

Search found 17437 results on 698 pages for 'nick long'.

  • Optimize Windows file access over network

    - by Djizeus
    At my company I frequently need to access shared files over a Windows network. These files are located on the other side of the planet, so I guess the file share goes through some kind of VPN over the Internet, but I don't control this and it is supposed to be "transparent" to me. However, it is extremely slow: displaying the contents of a directory in the file explorer takes about 10 seconds. Even over the Internet, I did not expect retrieving a list of file names to take that long. Are there any settings to optimize this from my Windows XP workstation, or is it mostly down to the way the network is configured? The only thing I have found so far is to cache all file names, whereas by default only short file names are cached (http://support.microsoft.com/kb/843418).

  • Ubuntu Wireless not working on Lenovo T400

    - by VmaxBoss
    This problem started after upgrading to 12.04, and my system is 'up2date'. I have tried most of the solution proposals found on the net.

    lspci -nnk | grep -iA2 net
        00:19.0 Ethernet controller [0200]: Intel Corporation 82567LF Gigabit Network Connection [8086:10bf] (rev 03)
        Subsystem: Lenovo Device [17aa:20ee]
        Kernel driver in use: e1000e
        03:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237]
        Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211]
        Kernel driver in use: iwlagn

    iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11abgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=15 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:off

    sudo lshw -C network
        *-network
            description: Ethernet interface
            product: 82567LF Gigabit Network Connection
            vendor: Intel Corporation
            physical id: 19
            bus info: pci@0000:00:19.0
            logical name: eth0
            version: 03
            serial: 00:22:68:1a:c4:75
            size: 100Mbit/s
            capacity: 1Gbit/s
            width: 32 bits
            clock: 33MHz
            capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.0.2-k2 duplex=full firmware=1.8-3 ip=192.168.2.154 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
            resources: irq:29 memory:fc000000-fc01ffff memory:fc024000-fc024fff ioport:1820(size=32)
        *-network DISABLED
            description: Wireless interface
            product: PRO/Wireless 5100 AGN [Shiloh] Network Connection
            vendor: Intel Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: wlan0
            version: 00
            serial: 00:26:c6:6c:2d:24
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=iwlagn latency=0 multicast=yes wireless=IEEE 802.11abgn
            resources: irq:30 memory:f4300000-f4301fff

    Please help. Br/VB
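
    Since lshw reports the wireless interface as DISABLED, a generic first check (a sketch only, not specific to this machine) is whether the radio is blocked by the hardware switch or by rfkill, and whether reloading the Intel driver brings it back:

        # Check for a hardware or software kill switch (ThinkPads have a physical slider)
        rfkill list

        # Clear any software block
        sudo rfkill unblock all

        # Reload the wireless driver and watch the kernel log
        # (on 12.04 kernels the module may be named iwlwifi rather than iwlagn)
        sudo modprobe -r iwlagn
        sudo modprobe iwlagn
        dmesg | tail -n 20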

  • Altering specific configuration values in the On-Screen Keyboard of Windows 7

    - by fnst
    I have to use the On-Screen Keyboard of Windows 7 for typing. (In case you haven't used it yet, you can get it via All Programs - Accessories - Ease of Access - On-Screen Keyboard, or simply search for "osk.exe".) It offers a feature to "hover" over the buttons. Microsoft describes it as follows: In hovering mode, you use a mouse or joystick to point to a key for a predefined period of time, and the selected character is typed automatically. Here is my specific problem: the predefined period of time is too long to be useful for me. The minimum amount of time is 0.5 seconds (max. 3 seconds). Is there any way to lower this value to something below 0.5 seconds, for example by editing the registry? Edit: The entry 'HKEY_CURRENT_USER\Software\Microsoft\Osk\HoverPeriod' cannot be set lower than 500 ms. Any helpful tip would be great!
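
    For reference, the registry value mentioned in the edit can be written from an elevated command prompt; this is only a sketch, and as noted the On-Screen Keyboard appears to clamp anything below 500 ms back to that minimum:

        rem Set the OSK hover delay in milliseconds (values below 500 are reportedly ignored)
        reg add "HKCU\Software\Microsoft\Osk" /v HoverPeriod /t REG_DWORD /d 500 /f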

  • netgear GS108TV2 RSTP configuration

    - by jhowland
    I have a large set of GS108TV2 units; my goal is to set up a network made up of several loops for redundancy/fault tolerance. I have a minimal 3-switch loop configured, with RSTP enabled on two ports of each switch. My bridge max age is set to 6 and my bridge forward delay to 4, which are the minimum values allowed; the hello time is fixed at 2 seconds. The switches respond to a cable being removed from a socket, but it takes too long: I cannot get a switch to respond to a loss of connection on one of the redundant ports in less than 20 seconds. Is there any way to configure these switches to respond faster than that? 20 seconds is unacceptable for my application. Thanks in advance for any help.

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective, to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details are not really the point of this article (my way of saying: don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned. Really, that's the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property of each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO: Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO: Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
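
    To see how the two delimiters nest, here is a stripped-down sketch (not the actual template from this project) that would emit one empty partial class per entity:

        <# foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name)) { #>
        public partial class <#=entity.Name#>
        {
        }
        <# } #>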
    The two work together to provide lazy loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me. I had only worked with XML in the past using the DOM or XML reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly more efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one statement, with no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (I haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

  • Why can't email clients create rules for moving messages by dates like "yesterday"?

    - by Morgan
    I've never seen an email client in which I could easily create a rule to do something like "move messages from yesterday to a folder". Is there some esoteric reason why this would be difficult? I know I can easily create rules around specific dates, but that isn't the same thing by a long shot; am I missing something? In Outlook 2010 I can create search folders that do sort of this type of thing, but you can't create rules around a search folder... It seems like either I am missing something major, or this is terribly short-sighted.

  • Cannot delete folder - Content seems to be nested recursively

    - by RikuXan
    I cannot delete a folder located on my hard disk by any means. I don't quite know how it was created; all I know is that it is a pretty deep structure of folders (too deep to delete at once, since Windows complains that the path name is too long), but the problem in the end is that I can't "pull out" the inner folders, because they don't seem to be folders anymore (the context menu lacks things like "Properties", "Cut", "Copy", "Delete", etc.). Here is a picture of what a right-click looks like on one of these "folders": As you can see, the current folder is in very deep, but that is not the problem; rather, it's the one I left-clicked on. Does anyone have any advice on how to get rid of these? I tried a chkdsk, which reported no errors. I also tried deleting those folders via a VMware Ubuntu, with no success. I also tried a batch file from a volunteer at the MS boards that should automatically de-nest such folders, but I guess mine is a special case, since the tool only created more such folders.
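
    For anyone else who hits this, one generic workaround for trees that have grown past the path-length limit is to mirror an empty directory over the problem folder with robocopy, which deletes the contents as it goes (a sketch; the paths below are just placeholders):

        rem Create an empty directory and mirror it over the undeletable tree
        mkdir C:\empty
        robocopy C:\empty "D:\path\to\problem\folder" /MIR

        rem Then remove the now-empty directories
        rd /s /q "D:\path\to\problem\folder"
        rd C:\empty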

  • DVI monitor detected only on computer startup

    - by kamil
    I've recently connected a new monitor, an LG M2252D-PZ, to a rather outdated computer with Windows XP and a Radeon 9600. XP has SP3 installed, and the video drivers are the latest version from back when the video card was still supported. My problem is that the monitor works fine only as long as I don't turn it off or switch it to a different input. When I turn it back on, it says "no signal". The key to the problem must be the DVI port, to which the new monitor is connected. The previous monitor was connected to the VGA output, and I've tested that the new one also works fine when connected to the analogue port. Apparently, the computer tests for the presence of a monitor on the DVI port only at startup. The question is, how do I change this?

  • grep/search for multiple lines in a file.

    - by GSto
    Let's say I have a file with a long nested array that's formatted like this:

        array(
            'key1' => array(
                'val1' => 'val',
                'val2' => 'val',
                'val3' => 'val',
            ),
            'key2' => array(
                'val1' => 'val',
                'val2' => 'val',
                'val3' => 'val',
            ),
            //etc...
        );

    What I would like to do is have a way to grep/search the file and, by knowing 'key1', get all the lines (the sub-array) it contains. Is this possible?
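
    grep itself is line-oriented, but a range address in sed (or the equivalent awk pattern) can print everything from a known key down to the closing ")," of its sub-array; a rough sketch, assuming the file really is laid out as above and is called config.php:

        # Print from the line containing 'key1' => array( up to the next line containing "),"
        sed -n "/'key1' => array(/,/),/p" config.php

        # The same idea in awk
        awk "/'key1' => array\(/,/\),/" config.php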

  • Continue process after closing terminal?

    - by Jakobud
    Recently, I tried to unzip a 30 GB zip file on a remote system using PuTTY. As the long unzipping process continued, I closed PuTTY, assuming that the process would just continue to run on the remote machine. When I came back later and logged back into the machine, I realized that the process must have stopped only part way through when I closed PuTTY. I wasn't expecting that to happen. My question is: how do I prevent this problem? Can I somehow fire off a process in the background? Or should I just set up a one-time cron job that will run the process for me?
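
    For the record, two standard ways to keep a job running after the SSH session goes away (generic sketches; substitute your own archive name):

        # Detach the job from the terminal with nohup; output is captured in unzip.log
        nohup unzip huge-archive.zip > unzip.log 2>&1 &

        # Or run it inside screen, detach with Ctrl-A D, and reattach later
        screen -S unzipjob
        unzip huge-archive.zip
        # ...from a later session:
        screen -r unzipjob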

  • Simple Central Storage for HA mail server

    - by jtnire
    Hi everyone, I will have two Postfix servers; one will be a backup of the other. What is the easiest method to provide central storage to both of these boxes? My infrastructure is very simple: just a lot of Xen hosts, so there is no SAN or anything. Each Xen host does have RAID 1, though. I don't mind mounting NFS shares on each of those mail servers, as long as the NFS server isn't a single point of failure. Is there such a thing as redundant NFS? Any help would be appreciated. Thanks

  • Any reasonable UPS for a Desktop PC, just to shut it down?

    - by Michael Stum
    While I do have a surge protector to protect against overvoltage (hopefully), I have nothing against undervoltage. When a lightning storm hit, I had the lights flickering at some point. The PC continued to run, but it got me thinking of getting a UPS as a way to a) have a clean 120 V/60 Hz power source and b) have a way to shut down the PC in case something bad happens. I heard not all UPSes protect against power spikes, so I wonder if someone has a recommendation? It does not need to keep the PC on for a long time if the power goes out; it's good enough if it shuts down the PC after 5 minutes or so. There are two PCs connected. One is a Core i7-860 with a Radeon 5870 running Windows 7 Ultimate (so quite power hungry; it uses a 600 W PSU, but I have no measurements of the actual usage), and the other one is a Windows Home Server box running WHS/Windows Server 2003. Any recommendations in the low-price segment?

  • Kernel Compiling from Vanilla to several machines

    - by Linux Pwns Mac
    When compiling kernels for machines, is there a safe or correct way to create a template for, say, servers? I work with a lot of RHEL servers and want to compile their kernels with GRSEC. However, I do not wish to always rebuild from the .config for each machine and go in and remove a bunch of unrelated modules like wireless, Bluetooth, etc., which you typically do not need on servers. I want to create a template .config that can be used on any machine, but is there a safe way to do that when hardware changes? I know that with Linux, at least from my experience, you can jump across hardware far more easily than with Windows/OS X. I assume that as long as I leave most of the main hardware modules and CPU options in, this could produce a .config that would work for just about any machine?
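
    One generic pattern (a sketch, not anything RHEL-specific) is to keep a single template .config and let the kernel build system reconcile it with each tree, optionally trimming it to the modules a given machine actually loads:

        # Start from the shared template and answer only the options it doesn't cover
        cp /path/to/template.config .config
        make oldconfig            # newer trees also offer: make olddefconfig

        # Optionally trim the config to the modules currently loaded on this machine
        make localmodconfig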

  • Converting massive images to PDF, without crashing applications

    - by BloodyIron
    I'm trying to work with a large-format scanner, and we are scanning very long documents. For example, one of our documents we cut into two pieces, and one of those pieces is 3633x82486 pixels. My application, Scanning Master 21+, which comes with the device (a Graphtec CSX300-09), can output PDF; however, when I try to save to PDF it complains that the file is too large. I can successfully output to BMP, however. GIMP can even open this BMP, after taking a while to load it. The resulting files range from 200 MB to 1.2 GB in size. Acrobat refuses to open the BMP format, saying it isn't supported or is damaged (which I know is not true). As I mentioned, the PDF plugin for GIMP crashes when I try to export to PDF. I'm really not sure what the best tool for this job is. So what is the best tool to produce PDF documents from very large images?
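
    If the bundled software can only manage BMP output, a command-line converter may cope better than a GUI tool; for example, ImageMagick can be told to work within explicit memory limits so it spills to disk instead of crashing (a sketch; the limits and file names are placeholders):

        convert -limit memory 1GiB -limit map 2GiB scan.bmp scan.pdf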

  • What is stable as Ubuntu in kernel space, but also cutting-edge as Fedora in user land?

    - by HRJ
    I am a long-time Fedora user (since Fedora 6). Previously I used Gentoo (for 2 years) and Slackware (for 5 years). The things I liked about Fedora are the frequency of package updates and the great community. But lately I have noticed that Fedora is becoming too cutting-edge, nay, bleeding-edge. They changed the DNS client to be strict, without any warning, which broke some of their own packages for two Fedora releases. More critically, their LVM modules are sometimes not compatible across Fedora 12, 13 and 14. Ubuntu is nicely polished but seems too stable for my liking; some of the user-space applications are two major version numbers behind (even in testing or unstable or whatever they call it). Is there any Linux distro that has the stability of Ubuntu in kernel space and the bleeding edge in applications (especially harmless applications like, say, Stellarium)?

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at failure isn't consistent. The setup is a form which is accepted by PHP. I've googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?
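
    For comparison, these are the php.ini directives that usually govern large uploads; a sketch of the values to check (phpinfo() will show what is actually in effect on the server):

        ; php.ini, or a local override if the host allows one
        upload_max_filesize = 64M
        post_max_size       = 64M
        max_execution_time  = 300
        max_input_time      = 300
        memory_limit        = 128M

    Note that IIS or the hosting control panel may impose its own request-size and timeout limits on top of these, so a matching web-server setting may also be needed.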

  • Change filtering method used by Firefox when zooming

    - by peak
    I often zoom in a step or two when reading long texts in Firefox, but when I do so the images become super blurry. It's not really a big deal, but when reading text on images (mathematical equations, mostly) it's a bit distracting. It seems as if they are scaled using only bilinear interpolation. If I scale an image the same amount in, for example, Paint.NET or Photoshop, the result is much better. Is there any way to change the filtering method used by Firefox to bicubic or another, better method? I am using Firefox 3.5 on Windows, BTW.

  • How can I stop Pages from bolding/italicizing ordered list item numbers/letters?

    - by donut
    I'm writing a long, technical document that uses a lot of ordered lists in Pages '09. Often, when I bold or italicize the first bit of text I type for a list item, Pages applies the same styling to the letter or number of that list item. It has been doing this for the first 13 pages I've written, and now I've found out that I need to undo that. Is there any place to change this behavior, or to easily fix existing list item numbers/letters that are styled? The only way I've found is to select the whole list item (all of its content), remove the styling (un-bolding and un-italicizing), and then go back in and re-apply my styling to its content. Definitely not ideal.

  • VMWare Player disrupting Samba on Ubuntu 9.04 host?

    - by TonyUser
    I've had Ubuntu 9.04 running on this computer for a while now. I just installed VMware Player on my system, and now when I try to browse the network on the Ubuntu host, all the computers show up one time only. If I try to browse again, no computers appear and even the workgroup is missing. I have to run:

        sudo /etc/init.d/vmware stop
        sudo /etc/init.d/samba restart

    to get the network visible again. As long as VMware Player is stopped, Samba seems fine. I guess it's got to do with the virtual adapters that VMware installs. Is there a way to get them both to work, or did I just misconfigure something?
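
    One hedged guess, since the problem appears only while the VMware virtual adapters are up: binding Samba explicitly to the physical interface sometimes keeps browsing sane. In the [global] section of /etc/samba/smb.conf (the interface name here is an assumption):

        # Only serve and browse on loopback and the real LAN interface
        interfaces = lo eth0
        bind interfaces only = yes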

  • Suhosin per-URL exceptions?

    - by STATUS_ACCESS_DENIED
    I am using SimpleID as my OpenID provider, and it turns out that if I log in via pages like those on StackExchange, one of the parameters of the GET request gets dropped by Suhosin. The name of the variable is s, and I presume it's responsible for the "return to URL" part after login. All of this is not a problem as long as I am already logged into SimpleID from before. However, as soon as the site on which I want to log in via OpenID ends up at the login screen of SimpleID, the redirect back to the site I came from no longer works, because of the dropped variable. Is there a way to configure, on a per-virtual-host or per-URL basis, that the maximum length for GET parameters be ignored, so that a parameter s exceeding the (globally) set limit is let through? I'm using Apache 2.2, so I was wondering whether a mechanism similar to setting PHP ini variables from within the server configuration exists for Suhosin.
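
    Suhosin's limits are ordinary PHP ini settings, so with mod_php they can be overridden per virtual host or per location from the Apache configuration; a sketch (the location path is hypothetical, and the directive names are Suhosin's documented ones):

        <Location /simpleid>
            php_admin_value suhosin.get.max_value_length 2048
            php_admin_value suhosin.request.max_value_length 2048
        </Location>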

  • PC not booting, no video output, hard drive noises

    - by bullettime
    My computer suddenly stopped booting up. I just came back from a week-long trip, and it was working fine before I left. When I power up the computer now, it seems to power up just fine (LEDs on, fans working), but then there's no output on the screen at all, and there's a noise that seems to be coming from the HD, as if it is trying to read something. After the noise starts, it goes through 6 cycles and then stops. I'm not sure, but I don't think it made this noise when the PC was still working. I tried using the onboard video card instead, but it made no difference. What could be wrong with my computer?

  • My Domain Is Getting Blocked As Gambling

    - by Tim Scott
    I have a site at http://slotted.co. Some of my would-be users complain that their company firewall is blocking access. At least one user told me it was flagged as a gambling domain, which it is not. What can I do about this? Incidentally, I own some other domains, such as signupster.com, which redirect to my site. I wonder if a quick workaround would be to make that my main domain and have slotted.co redirect? Obviously I would prefer, in the long term, that slotted.co is considered clean.

  • How do I keep a newly started program from taking focus?

    - by Jugglingnutcase
    Say I'm coding in Emacs and want to start up a music program. Because it takes too long to start up, I go back to coding and type away. When the music application starts up, the focus is stolen (gasp! stolen!) away from Emacs and goes to the music application, often mid-thought. Is there any way to keep this from happening and have the newly started application not take focus until I see that it's up and ready to be used? Besides getting rid of my ADD, of course. Or getting an impossibly fast computer that can keep up with my mind. I'm using a Windows XP system, but I will soon have a Windows 7 system, and I have Linux at home.
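
    For what it's worth, the knob Windows exposes for this is the ForegroundLockTimeout value (the one Tweak UI toggles); a sketch of the usual tweak, with the caveat that 200000 is already the XP default and some launchers bypass it anyway:

        rem Ask Windows not to let background applications take the foreground; log off afterwards
        reg add "HKCU\Control Panel\Desktop" /v ForegroundLockTimeout /t REG_DWORD /d 200000 /f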

  • Recursive function with for loop python

    - by user134743
    I have a question that should not be too hard, but it has been bugging me for a long time. I am trying to write a function that searches a directory (which contains other folders) for all files that have the extension .jpg and whose size is bigger than 0, and then prints the sum of the sizes of the files that match. What I am doing right now is:

        def myFunction(myPath, fileSize):
            for myFile in glob.glob(myPath):
                if os.path.isdir(myFile):
                    myFunction(myFile, fileSize)
                if fnmatch.fnmatch(myFile, '*.jpg'):
                    if os.path.getsize(myFile) > 1:
                        fileSize = fileSize + os.path.getsize(myFile)
            print "totalSize: " + str(fileSize)

    This is not giving me the right result. It sums the sizes of the files of one directory, but it does not keep summing the rest. For example, if I have these paths:

        C:/trial/trial1/trial11/pic.jpg
        C:/trial/trial1/trial11/pic1.jpg
        C:/trial/trial1/trial11/pic2.jpg

    and

        C:/trial/trial2/trial11/pic.jpg
        C:/trial/trial2/trial11/pic1.jpg
        C:/trial/trial2/trial11/pic2.jpg

    I will get the sum of the first three and the sum of the last three, but I won't get the size of all 6 together, if that makes sense. Thank you so much for your help!
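
    For what it's worth, the inner sums get lost because the integer fileSize is passed by value, so additions made inside the recursive call never reach the caller. A small sketch of one fix is to return the running total instead (or to skip manual recursion entirely and use os.walk); the directory below is just an example:

        import os
        import fnmatch

        def total_jpg_size(path):
            """Sum the sizes of all non-empty .jpg files under path, recursively."""
            total = 0
            for root, dirs, files in os.walk(path):
                for name in files:
                    if fnmatch.fnmatch(name, '*.jpg'):
                        size = os.path.getsize(os.path.join(root, name))
                        if size > 0:
                            total += size
            return total

        print "totalSize: " + str(total_jpg_size("C:/trial"))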

  • Is it preferable to use POP or IMAP to check multiple google mail accounts from one?

    - by Adam Tuttle
    I have email accounts on several domains that use Google Mail, and thus far I've only ever used POP to send and receive mail from a single inbox. This is quite functional and, as long as I remember to select the appropriate FROM address when starting a new thread, mostly works without any additional thought: messages received on account A are replied to via account A, and so on. My only complaint is the lag; I've sometimes seen as much as half an hour between a message arriving in an inbox and its being imported into my primary inbox via POP. My question is this: Google also supports IMAP. Would that be in any way preferable to POP access? Reduced lag would be nice, but not at a general speed cost, if everything I do has to check another mailbox too.
