Search Results

Search found 39473 results on 1579 pages for 'johny why'.

Page 208/1579

  • Why does calling IEnumerable<string>.Count() create an additional assembly dependency?

    - by Gishu
    Assume this chain of DLL references: Tests.dll >> Automation.dll >> White.Core.dll, with the following line of code in Tests.dll, where everything builds:

        result.MissingPaths

    Now when I change this to result.MissingPaths.Count() I get the following build error for Tests.dll: "White.UIItem is defined in an assembly that is not referenced. You must add a reference to White.Core.dll." And I don't want to do that, because it breaks my layering. Here is the type definition for result, which is in Automation.dll:

        public class HasResult
        {
            public HasResult(IEnumerable<string> missingPaths)
            {
                MissingPaths = missingPaths;
            }

            public IEnumerable<string> MissingPaths { get; set; }

            public bool AllExist
            {
                get { return !MissingPaths.Any(); }
            }
        }

    Down the call chain, the input parameter to this constructor is created via (the TreeNode class is in White.Core.dll):

        assetPaths.Where(assetPath => !FindTreeNodeUsingCache(treeHandle, assetPath));

    Why does this dependency leak when calling Count() on IEnumerable? I then suspected that lazy evaluation was causing this (for some reason), so I slotted in a ToArray() in the above line, but that didn't work.

    Update 2011-01-07: Curiouser and curiouser! It won't build until I add a White.Core reference. So I add a reference and build it (in order to find the elusive dependency source). Opening it up in Reflector, the only references listed are Automation, mscorlib, System.Core and NUnit. So the compiler threw away the White reference as it was not needed. ILDASM also confirms that there is no White AssemblyRef entry. Any ideas on how to get to the bottom of this (primarily for 'now I wanna know why' reasons)? What are the chances that this is a VS2010/MSBuild bug?

    Update 2011-01-07 #2: As per Shimmy's suggestion, I tried calling the method explicitly as an extension method, Enumerable.Count(result.MissingPaths), and it stops cribbing (not sure why). However, I moved some code around after that and now I'm getting the same issue at a different location using IEnumerable, this time reading and filtering lines out of a file on disk (totally unrelated to White). Seems like it's a symptom-fix.

        var lines = File.ReadLines(aFilePath).ToArray();

    Once again, if I remove the ToArray() it compiles again; it seems that any method that causes the enumerable to be evaluated (ToArray, Count, ToList, etc.) causes this. Let me try and get a working tiny app to demo this issue...

    Update 2011-01-07 #3: Phew! More information. It turns out the problem is just in one source file; this file is LINQ-phobic. Any call to an Enumerable extension method has to be explicitly called out. The refactorings that I did caused a new method to be moved into this source file, which had some LINQ :) Still no clue as to why this class dislikes LINQ.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using G.S.OurAutomation.Constants;
        using G.S.OurAutomation.Framework;
        using NUnit.Framework;

        namespace G.S.AcceptanceTests
        {
            public abstract class ConfigureThingBase : OurTestFixture
            {
                ....

                private static IEnumerable<string> GetExpectedThingsFor(string param)
                {
                    // Even this won't compile - although it compiles fine in an
                    // adjoining source file in the same assembly:
                    //IEnumerable<string> s = new string[0];
                    //Console.WriteLine(s.Count());

                    // This is the line that is now causing a build failure:
                    // var expectedInfo = File.ReadLines(someCsvFilePath)
                    //     .Where(line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase))
                    //     .Select(line => line.Replace("%PLACEHOLDER%", param))
                    //     .ToArray();

                    // Unrolling the LINQ above removes the build error:
                    var expectedInfo = Enumerable.ToArray(
                        Enumerable.Select(
                            Enumerable.Where(
                                File.ReadLines(someCsvFilePath),
                                line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase)),
                            line => line.Replace("%PLACEHOLDER%", param)));
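
    For reference, a minimal, self-contained sketch of the two call shapes being compared above (the class and parameter names here are invented purely for illustration):

        using System.Collections.Generic;
        using System.Linq;

        class CountCallShapes
        {
            static void Demo(IEnumerable<string> missingPaths)
            {
                // Extension-method syntax - the form that triggered the
                // "add a reference to White.Core.dll" build error described above.
                int viaExtensionSyntax = missingPaths.Count();

                // Explicit static call on Enumerable - per the second update, this
                // form compiled without demanding the extra assembly reference.
                int viaStaticCall = Enumerable.Count(missingPaths);
            }
        }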

    Read the article

  • Why does SharePoint claim there is not enough disk space for backup when there is plenty available?

    - by Mr Shoubs
    I'm trying to run the following command:

        Backup-SPFarm -Directory E:\Backups -BackupMethod full -Verbose

    However, it errors saying there isn't enough disk space. The backup will be about 1.8 GB in size and I have 27.52 GB free, so why does it think I need 30 GB?

        VERBOSE: Leaving BeginProcessing Method of Backup-SPFarm.
        VERBOSE: Performing operation "Backup-SPFarm" on Target "SHAREPOINTSERV".
        Backup-SPFarm : There is not enough disk space. Free additional space on your hard disk and then try again.
        Approximate amount of space needed: 30.12 GB. Amount of space free on disk: 27.52 GB.
        At E:\Backups\Script\BackupSharePointFarm.ps1:3 char:14
        + Backup-SPFarm <<<< -Directory E:\Backups -BackupMethod full -Verbose
            + CategoryInfo          : InvalidData: (Microsoft.Share...mdletBackupFarm:SPCmdletBackupFarm) [Backup-SPFarm], SPException
            + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletBackupFarm
        VERBOSE: Leaving ProcessRecord Method of Backup-SPFarm.
        VERBOSE: Leaving EndProcessing Method of Backup-SPFarm.

    Read the article

  • Why does Windows 7 need hardware virtualization to run XP mode?

    - by Ken Pespisa
    I have a MacBook Pro and I've run VMware Fusion's Unity mode and Parallels' Coherence mode alongside Mac OS X, and both work pretty seamlessly. I figured XP Mode in Windows 7 would be something similar, but I then learned my machine requires hardware virtualization support, which it does not have. My machine is an HP dc7800, a dual-core 2.2 GHz machine with 4 GB of RAM. Certainly it has the horsepower to run a virtual environment alongside the primary OS. I'm wondering: 1) why did Microsoft decide to make hardware virtualization a requirement, and 2) what am I missing? Is the experience similar to Parallels' Coherence mode / Fusion's Unity mode? Thanks!

    Read the article

  • Why is memory management so visible in Java VM?

    - by Emil
    I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize Tomcat's JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space. Why is this such an issue for Java VMs compared to other garbage-collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python shows that Java easily has an order of magnitude more hits in Google than the other languages combined. I'd also be interested in references to technical papers/blog posts/etc. explaining the design choices behind JVM GC implementations, across different JVMs or compared to other interpreted-language VMs (e.g. comparing the Sun or IBM JVM to Parrot). Are there technical reasons why JVM users still have to deal with non-auto-tuning heap/PermGen sizes?
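
    For context, the kind of tuning referred to here typically ends up in Tomcat's startup environment via CATALINA_OPTS; a representative (purely illustrative) setting would be:

        CATALINA_OPTS="-Xms256m -Xmx1024m -XX:MaxPermSize=256m"

    where -Xms/-Xmx bound the heap and -XX:MaxPermSize bounds the PermGen space mentioned above.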

    Read the article

  • Why does waking a PC up with a timer act differently than with the power button?

    - by Dan Rasmussen
    I have a Windows 7 machine set up as a server. It has no monitor and is only accessed through remote desktop. I set up two scheduled tasks, one to put the computer to sleep at night and another to wake it up in the morning. When it's woken up from sleep via a timer, it stays awake for only a couple minutes before going back to sleep. When woken up by pushing the power button, however, it stays awake all the way until the sleep timer. Why does my PC behave differently in these two scenarios? I have set the PC not to prompt for a user's password on wake, since I worried that the login screen might follow different power rules. I tried SmartPower Configuration but had the same problems. I can provide more details if questions are asked in the comments, but I'm not sure what's relevant.

    Read the article

  • Why is my Wi-Fi connection slower than Ethernet even though the bandwidth should be saturated?

    - by supercheetah
    I'm wondering why my wireless connection is slower than my wired connection for traffic going to the outside world (so, not files being transferred within the network). Either local connection should be faster than the outside connection, which, I would think, means that downloading something like an ISO or other large file from the Internet should take the same time either way, since that should saturate the Internet connection anyway. Does it have something to do with the encryption (WPA)? Could it have something to do with MTU, since the MTU for Ethernet can be in the range of 1500 to 9000 bytes, versus 2304 bytes for 802.11? Do wireless packets have to be buffered, whereas this wouldn't be an issue with Ethernet? What's the math behind the difference?

    Read the article

  • JW Player cannot play m3u8 stream?

    - by why
    With reference to this example, http://developer.longtailvideo.com/player/branches/adaptive/test/provider.html, I tried it myself. Here is my code:

        <html>
        <head>
          <script type="text/javascript" src="jwplayer.js"></script>
          <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
          <title>Provider tests</title>
          <style>
            body { padding: 50px; font: 13px/20px Arial; background: #EEE; }
            form { margin-top: 20px; }
            #player { -webkit-box-shadow: 0 0 5px #999; background: #000; }
            ul { margin-top: 40px; padding: 0 0 0 20px; list-style-type: square; }
          </style>
        </head>
        <body>
          Test M3U8
          <div id="player">You need Flash to play these tests</div>
          <script type="text/javascript">
            jwplayer("player").setup({
              file: '../m3u8/index.m3u8',
              flashplayer: 'player.swf',
              provider: 'adaptiveProvider.swf',
              height: 360,
              width: 640
            });

            function loadStream(url) {
              jwplayer("player").load({file: url, provider: 'adaptiveProvider.swf'});
              jwplayer("player").play();
              return false;
            }

            $(document).ready(function() {
              loadStream('http://localhost/m3u8/index.m3u8');
            });
          </script>
          <ul id="streamlist"></ul>
          <div id="panel"></div>
        </body>
        </html>

    But JW Player does not play it. BTW: my VLC can play http://localhost/m3u8/index.m3u8 just fine.

    Read the article

  • Obj-C Error: Expected expression before ...... (why?)

    - by Horatiu Paraschiv
    Hi, I have an enum declared like this:

        typedef enum {
            Top,
            Bottom,
            Center
        } UIItemAlignment;

    In my code I try to use it like this:

        item.alignment = UIItemAlignment.Top;

    I get an error like this: "Expected expression before 'UIItemAlignment'". If I use only:

        item.alignment = Top;

    everything works fine, but why do I get this error if I try to use it the other way? _alignment is an NSInteger and it has a property declared like this:

        @property (readwrite) NSInteger alignment;

    and I synthesized it in my implementation file. So my question is, why do I get this error?

    Read the article

  • Why are network printers not available in the Add Printer Wizard...when run over a network?

    - by Kev
    From a Windows 2003 server machine I browsed the network to an XP client (\\computername in Explorer), then double-clicked Printers and Faxes and then Add Printer. In the wizard, the second screen normally asks if you want to install a local printer or a network printer. Well, in this case, it seems to assume I want a local printer, because the second screen is what would normally be the third screen if you chose a local printer and clicked Next. I want to install a network printer on a remote machine for its local users. Is this not possible? If not, why not?

    Read the article

  • CUDA: more threads for the same work = longer run time despite better occupancy. Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem:

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        __global__ void less_threads(float *d_out)
        {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                // Do 12 computations
                num_inliers += threadIdx.x * 1;
                num_inliers += threadIdx.x * 2;
                num_inliers += threadIdx.x * 3;
                num_inliers += threadIdx.x * 4;
                num_inliers += threadIdx.x * 5;
                num_inliers += threadIdx.x * 6;
                num_inliers += threadIdx.x * 7;
                num_inliers += threadIdx.x * 8;
                num_inliers += threadIdx.x * 9;
                num_inliers += threadIdx.x * 10;
                num_inliers += threadIdx.x * 11;
                num_inliers += threadIdx.x * 12;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
        }

        __global__ void more_threads(float *d_out)
        {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                // Do 4 computations
                num_inliers += threadIdx.x * 1;
                num_inliers += threadIdx.x * 2;
                num_inliers += threadIdx.x * 3;
                num_inliers += threadIdx.x * 4;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
        }

        int main(int argc, char* argv[])
        {
            float *d_out = NULL;
            cudaMalloc((void**)&d_out, sizeof(float) * 25000);
            more_threads<<<780, 128>>>(d_out);
            less_threads<<<780, 32>>>(d_out);
            return 0;
        }

    Note that both kernels should do the same amount of work in total (the if (threadIdx.x == -1) test is a trick to stop the compiler from optimising everything away and leaving an empty kernel). The work should be the same, as more_threads uses 4 times as many threads but with each thread doing 4 times less work. The significant results from the profiler are as follows:

        more_threads: GPU runtime = 1474 us, reg per thread = 6,  occupancy = 1,    branch = 83746, divergent_branch = 26, instructions = 584065, gst request = 1084552
        less_threads: GPU runtime = 921 us,  reg per thread = 14, occupancy = 0.25, branch = 20956, divergent_branch = 26, instructions = 312663, gst request = 677381

    As I said previously, the run time of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here! Thanks

    Read the article

  • Why is my NTP controlled computer clock two minutes ahead?

    - by Martin Liversage
    The clock in my computer is configured to be synchronized using NTP. To verify this I have tried two NTP clients using various NTP servers. My computer and the NTP clients are in complete agreement about the current time even across a wide range of NTP servers. I also have a GPS and my national phone company provides an accurate clock available by calling a specific phone number. Both my GPS and the phone company agrees on the current time. However, my computer is almost precisely two minutes (or 1 minute and 59 seconds) ahead of what I believe to be the "real" current time where I live. Why is my computer two minutes ahead? I realize that synchronizing clocks using the internet may not be entirely accurate as there is latency, but two minutes is a very long time on the internet. Is NTP really two minutes ahead? I'm running Windows 7 and live in the time zone UTC+1, but I don't think that is important in understanding my problem.
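
    For reference, on Windows 7 the built-in Windows Time service can report its own view of the offset against a chosen server; assuming that service is the NTP client in question (the server name below is only an example):

        w32tm /query /status
        w32tm /stripchart /computer:pool.ntp.org /samples:5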

    Read the article

  • Why is mcrypt not included in most Linux distributions?

    - by Daniel Lopez
    libmcrypt is a powerful encryption library that is very popular with PHP-based applications. However, most Linux distributions do not include it, which causes problems for many users who need to download and compile it separately. I am guessing that the reason it is not shipped is related to encryption or patent issues. However, the source code for the library itself is hosted and available on sourceforge.net. I have been searching unsuccessfully for a document or authoritative post that explains exactly why this library is not bundled with mainstream distributions. Can anyone provide a pointer to such material, or an explanation?

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said: "With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out", and I have run into that construct while maintaining some production code coded by the old-salt embedded guys here. I don't understand why this is necessary. When I want to set or clear a bit, I always just OR the register with a bitmask (to set) or AND it with the mask's complement (to clear). To my mind, this avoids any thread-safety problems, since I assume setting a register (either by assignment or by ORing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and the write may result in writing an old value to the register. So why read-modify-write? Is it still necessary?
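
    To make the two styles concrete, here is a minimal C# sketch (ReadRegister and WriteRegister are hypothetical stand-ins for a memory-mapped hardware register, which would normally be touched from C; only the pattern matters here):

        class RegisterBits
        {
            // Hypothetical accessors for an 8-bit memory-mapped register.
            static byte ReadRegister() { return 0; }
            static void WriteRegister(byte value) { }

            static void SetBit2()
            {
                // "OR with a bitmask" - on most hardware this still expands to a
                // read-modify-write sequence, because only whole-register writes exist.
                byte reg = ReadRegister();      // read
                reg |= 0x04;                    // modify: set bit 2
                WriteRegister(reg);             // write
            }

            static void ClearBit2()
            {
                byte reg = ReadRegister();      // read
                reg = (byte)(reg & ~0x04);      // modify: clear bit 2
                WriteRegister(reg);             // write
            }

            // An interrupt that lands between the read and the write can make either
            // style overwrite a bit an ISR just changed - the hazard described above.
        }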

    Read the article

  • Why do my download speeds vary so drastically during a download?

    - by J. Anthony Carter
    I watch the download speed rise and fall like waves in a storm. At night, during low bandwidth usage, I have achieved speeds as high as 3.23 M/sec, but then watch them decline to 250 K/sec and climb back up, over and over. During the day my best is around 1.67 M/sec, with lows down around 65 K/sec. On top of this, why does a download need to slow down when approaching the end of the download? It's not like a multi-hundred-ton train needing to decrease speed as it approaches the station.

    Read the article

  • Why use an FQDN as the DNS-server option in DHCP?

    - by Filip Haglund
    I've seen multiple default configurations of DHCP servers with an FQDN set as the DNS-server option. Doesn't this imply a catch-22, or the need for that DNS server to be in the hosts file of every single client? Example from dhcp3-server in Debian 6:

        option domain-name-servers ns1.internal.example.org;

    I can see how using a DNS name is convenient, because it's only an A record to change and the servers can be load balanced if wanted, but I don't see how the client is going to resolve the name. Why are people using FQDNs as DNS-server addresses in DHCP?
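
    For comparison, the same option given as plain IP addresses, which a client can use without any prior name resolution (the addresses are illustrative only):

        option domain-name-servers 192.168.1.10, 192.168.1.11;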

    Read the article

  • Why does pinging another in-network machine from a MacBook return the gateway's IP address?

    - by Xinwang
    I have three machines on my home network, connected by a wireless router. One is a server running Linux at 192.168.1.1, another is a ThinkPad with MS Windows XP at 192.168.1.2, and the last is a MacBook Pro with Mac OS X 10.6.3 at 192.168.1.3. When I ping the Linux server from the ThinkPad (MS Windows XP) I get the correct IP address, but when I ping it from the Mac I get the public address of my router, something like 61.135.181.175. Could you tell me why this happens, and how I can get the same ping result on the Mac as on Windows? Thanks

    Read the article

  • Why is my vhosts file interfering with my apache deployment?

    - by Avery Chan
    When I enable my vhosts file (i.e. uncomment this line: Include /private/etc/apache2/extra/httpd-vhosts.conf) I am unable to reach localhost. I /am/ able to reach the last virtual host listed in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Users/achan/Sites/epwbst"
            ServerName epwbst
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Users/achan/Sites/pxproj"
            ServerName pxproj
        </VirtualHost>

    Typing pxproj in my browser brings up the expected web content, but I am unable to reach epwbst or localhost. If I re-comment the vhosts line in my httpd.conf, I am able to reach localhost (i.e. "It works!"), but obviously I am then unable to reach my virtual hosts. I don't know how to continue troubleshooting this. Why can't I reach localhost when I've got my vhosts turned on?

    OS: Mac OS X 10.7. Server version: Apache/2.2.21 (Unix)
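
    As background for troubleshooting: once that include is active (assuming the file keeps the stock NameVirtualHost *:80 directive), name-based virtual hosting handles every request on port 80, so localhost only stays reachable if some <VirtualHost> block answers for it. A minimal sketch of such a block (the DocumentRoot shown is only an assumed example of the stock Mac OS X location):

        <VirtualHost *:80>
            DocumentRoot "/Library/WebServer/Documents"
            ServerName localhost
        </VirtualHost>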

    Read the article

  • Why is it a bad idea to use multiple NAT layers, or is it?

    - by iamrohitbanga
    The computer network of an organization has a NAT with the 192.168.0.0/16 IP address range. There is a department with a server that has an IP address 192.168.x.y, and this server handles the hosts of this department behind another NAT with the IP address range 172.16.0.0/16. Thus there are two layers of NAT. Why don't they use subnetting instead? That would allow easy routing. I feel multiple layers of NAT can cause performance losses. Could you please help me compare the two design strategies?

    Read the article

  • Why does explorer restart automatically when I kill it with Process.Kill?

    - by Thomas Levesque
    If I kill explorer.exe like this:

        private static void KillExplorer()
        {
            var processes = Process.GetProcessesByName("explorer");
            Console.Write("Killing Explorer... ");
            foreach (var process in processes)
            {
                process.Kill();
                process.WaitForExit();
            }
            Console.WriteLine("Done");
        }

    it restarts immediately. But if I use taskkill /F /IM explorer.exe, or kill it from the Task Manager, it doesn't restart. Why is that? What's the difference? How can I close explorer.exe from code without it restarting? Sure, I could call taskkill from my code, but I was hoping for a cleaner solution...
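
    For reference, a minimal sketch of the "call taskkill from my code" fallback mentioned above (included only to make the alternative concrete, not as the cleaner solution being asked for):

        using System.Diagnostics;

        class ExplorerKiller
        {
            static void KillExplorerViaTaskkill()
            {
                // Shell out to taskkill, which (as observed above) does not
                // trigger an immediate restart of explorer.exe.
                var psi = new ProcessStartInfo("taskkill", "/F /IM explorer.exe")
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                }
            }
        }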

    Read the article

  • Symfony 1.3 and forms: the password changes when I click on 'Save'. Why?

    - by user248959
    Hi, I have installed sfDoctrineGuardUser and have created this model that inherits from the sfGuardUser model:

        Usuario:
          inheritance:
            extends: sfGuardUser
            type: simple
          columns:
            nombre_apellidos: string(60)
            sexo: boolean
            fecha_nac: date
            provincia: string(60)
            localidad: string(255)
            email_address: string(255)
            avatar: string(255)
            avatar_mensajes: string(255)

    I have also created a module called 'miembros' based on that model. Well, I log in normally through sfGuardAuth/signin, then I go to miembros/edit/id/$id_of_the_member_i_used_to_log_in and push the 'Save' button. Then I log out. If I try to log in again, it says: "The username and/or password is invalid". Later, I realized that when I click 'Save' the value of the 'password' field changes (well, its encrypted version does). So that is the reason why I cannot then log in. But why does the value of the password change when I click on 'Save'? Regards, Javi

    Read the article

  • Why does Excel expose an 'Evaluate' method at all?

    - by jtolle
    A few questions have come up recently involving the Application.Evaluate method callable from Excel VBA. The old XLM macro language also exposes an EVALUATE() function. Both can be quite useful. Does anyone know why the general expression evaluator is exposed, though? My own hunch is that Excel needed to give people a way to get ranges from string addresses, and to get the value of named formulas, and just opening a portal to the expression evaluator was the easiest way. But of course you don't need the ability to evaluate arbitrary expressions just to do that. Application.Evaluate seems kind of...unfinished. It isn't very well documented, and there are quite a few quirks and limitations (as described by Charles Williams here: http://www.decisionmodels.com/calcsecretsh.htm) with what is exposed. I suppose the answer could be simply "why not expose it?", but I'd be interested to know what design decisions led to this feature. Failing that, I'd be interested to hear other hunches.

    Read the article
