Search Results

Search found 30 results on 2 pages for 'rwmnau'.

  • Hyper-V performance comparisons vs physical client?

    - by rwmnau
    Are there any comparisons between Hyper-V client machines and their physical equivalents? I've looked around and can find 4000 articles about improving Hyper-V performance, but I can't find any that actually do a side-by-side comparison or give benchmarking numbers. Ideally, I'm interested in a comparison of CPU, memory, disk, and graphics performance between something like the following: a powerful workstation (with plenty of RAM) with Windows 7 installed on it directly, and the same exact workstation with Hyper-V Server 2008 R2 (the bare Server role) and a full-screen Windows 7 client machine. Virtual Server 2005 had performance that didn't compare at all with actual hardware, but with the advances in CPU and hardware-level virtualization, has performance improved significantly? How obvious would it be to a user of the two scenarios above that one of them was virtualized, and does anybody know of actual benchmarking of this type?

    Read the article

  • View another persons calendar details in Outlook 2010

    - by rwmnau
    I know how to view somebody else's calendar - there are 100 walk-throughs like this one on Google. However, this feature has changed in Outlook 2010: you're no longer prompted for rights to view another person's calendar, and Outlook just displays their "Free/Busy" information, which doesn't help me. I'd like to request permission to view the details of their appointments, but I can't find any place to request permissions on their calendar - Outlook 2010 just gives me "Free/Busy" rights and then appears to have no option to request additional rights. Can anybody point me in the right direction?

    Read the article

  • Proper Outlook Free/Busy status when working from home

    - by rwmnau
    Our office (pretty large - about 200 people) has recently started part-time telecommuting. It's only one day/week now, but it's already raised some questions about availability, so I wanted to ask the users here, some of whom I'm sure telecommute to a corporate job, how they set their out-of-office status. Outlook has four statuses, and here's what I (and most others?) take them to mean: Free: I'm available for meetings. Busy: I'm in a meeting or otherwise occupied, and unavailable. Tentative: shy away from scheduling over this, but I'm available if needed. Out of Office: I'm on vacation and unavailable. However, I don't travel for work - do people tend to use this status to mean they're remote, but available for a phone call/bridge? As we begin to telecommute, I'll be available by phone for meetings, but not in person - any meeting can have a conference bridge, but some meetings just need to be in person. I'd like to send the right message about my status - people can schedule meetings with me on my telecommute days, but they should expect me to be on a conference bridge when they do. What status do people use? Does "Out of Office" correctly reflect that you're working from home, even though I perceive it to mean that somebody is on vacation? Maybe I'm the only one confused here, but as a company that's never before done telecommuting of any kind, I'm in the dark about standard practices. Thanks for the insight! Though this isn't a technical question directly, I'm hoping it's still applicable to the group and constructive - if it's not, please close it and accept my apology.

    Read the article

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files, of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff: serialize the entire file into a transmission object and transmit it at once, regardless of size - this would be the fastest, but a failure during transmission requires that the whole file be re-transmitted. Or, if the file size is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server - in addition to the complexity of this, it's slower because of continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time. The ideal transmission method (when taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering about how to find the best size for a particular client. Do I have some dynamic tuning step in my transmission that looks at the current bytes/second average, and then raises the transmission size until the speed starts to drop (failures overwhelm negotiation cost)? Or is there some other method for determining ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), but just what to consider as I create this process.
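
    For illustration, here is one shape that dynamic tuning step could take - an AIMD-style sketch (additive increase, multiplicative decrease, the same idea TCP's congestion control uses), where SendChunk and the surrounding class are hypothetical stand-ins for the app's own remoting call:

        ' Sketch only: grow the chunk while throughput improves, halve it on failure.
        ' SendChunk is assumed to be the app's own remoting call, returning True on success.
        Private ChunkSize As Integer = 4096                  ' start small (4KB)
        Private LastBytesPerSecond As Double = 0
        Private Const MaxChunkSize As Integer = 4 * 1024 * 1024

        Private Sub TuneAndSend(ByVal buffer As Byte(), ByVal offset As Integer, ByVal count As Integer)
            Dim watch As New System.Diagnostics.Stopwatch()
            watch.Start()
            If SendChunk(buffer, offset, count) Then
                watch.Stop()
                Dim bytesPerSecond As Double = count / Math.Max(watch.Elapsed.TotalSeconds, 0.001)
                ' Still speeding up: raise the chunk size linearly, up to a cap
                If bytesPerSecond > LastBytesPerSecond Then
                    ChunkSize = Math.Min(ChunkSize + 4096, MaxChunkSize)
                End If
                LastBytesPerSecond = bytesPerSecond
            Else
                ' A failure wastes only one chunk; cut the size sharply before retrying
                ChunkSize = Math.Max(ChunkSize \ 2, 4096)
            End If
        End Sub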

    Read the article

  • Transfer file using BITS without using IIS on one end?

    - by rwmnau
    I know that I can transfer files using BITS and a wrapper like SharpBITS, but it seems that I need an IIS server on one end - either to upload to or download from. Is there a way to use the BITS protocol to transfer a file without requiring IIS? Some kind of a "BITS Server" or "Listener" project that my client's BITS service can connect to. I'm looking for functionality that's exactly what BITS provides, but I'd prefer not to require that IIS be installed (though if I have to, I can). Thanks!
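
    For what it's worth, BITS download jobs speak ordinary HTTP/1.1 and mainly need the server to honor Range requests (uploads are the part that genuinely requires the IIS BITS server extension). Below is a minimal, GET-happy-path sketch of such a listener using .NET's HttpListener - the port, URL prefix, and file path are examples, and a real listener would also answer the HEAD request BITS typically sends first:

        ' Sketch of a Range-aware HTTP listener (download direction only)
        Imports System.IO
        Imports System.Net

        Module BitsDownloadListener
            Sub Main()
                Dim listener As New HttpListener()
                listener.Prefixes.Add("http://+:8080/files/")
                listener.Start()
                Do
                    Dim context As HttpListenerContext = listener.GetContext()
                    Dim data As Byte() = File.ReadAllBytes("C:\Share\payload.bin")
                    Dim offset As Long = 0, length As Long = data.LongLength
                    Dim range As String = context.Request.Headers("Range")
                    If range IsNot Nothing AndAlso range.StartsWith("bytes=") Then
                        ' Parse "bytes=start-end" (simplified: assumes both bounds are present)
                        Dim parts() As String = range.Substring(6).Split("-"c)
                        offset = Long.Parse(parts(0))
                        length = Long.Parse(parts(1)) - offset + 1
                        context.Response.StatusCode = 206    ' Partial Content
                        context.Response.AddHeader("Content-Range", _
                            String.Format("bytes {0}-{1}/{2}", offset, offset + length - 1, data.LongLength))
                    End If
                    context.Response.AddHeader("Accept-Ranges", "bytes")
                    context.Response.ContentLength64 = length
                    context.Response.OutputStream.Write(data, CInt(offset), CInt(length))
                    context.Response.Close()
                Loop
            End Sub
        End Module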

    Read the article

  • Tracking unique versions of files with hashes

    - by rwmnau
    I'm going to be tracking different versions of potentially millions of different files, and my intent is to hash them to determine whether I've already seen that particular version of the file. Currently, I'm only using MD5 (the product is still in development, so it's never dealt with millions of files yet), which is clearly not long enough to avoid collisions. However, here's my question - Am I more likely to avoid collisions if I hash the file using two different methods and store both hashes (say, SHA1 and MD5), or if I pick a single, longer hash (like SHA256) and rely on that alone? I know option 1 has 288 hash bits and option 2 has only 256, but assume my two choices are the same total hash length. Since I'm dealing with potentially millions of files (and multiple versions of those files over time), I'd like to do what I can to avoid collisions. However, CPU time isn't (completely) free, so I'm interested in how the community feels about the tradeoff - is adding more bits to my hash proportionally more expensive to compute, and are there any advantages to multiple different hashes as opposed to a single, longer hash, given an equal number of bits in both solutions?
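
    For reference, producing a single longer hash in .NET is a few lines against System.Security.Cryptography - a minimal sketch, streaming so the hundreds-of-MB files never have to fit in memory. (As a general point, a single well-studied 256-bit digest is usually considered at least as collision-resistant as two concatenated shorter digests of the same total length, since MD5 and SHA1 each bring known weaknesses.)

        Imports System.IO
        Imports System.Security.Cryptography

        Public Module FileHasher
            ' Returns the SHA256 digest of a file as a hex string
            Public Function GetFileHash(ByVal path As String) As String
                Using sha As SHA256 = SHA256.Create()
                    Using stream As FileStream = File.OpenRead(path)
                        Return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "")
                    End Using
                End Using
            End Function
        End Module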

    Read the article

  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means - I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot, because if you get it inserting a million rows into a table 100 columns wide, there's virtually no way to determine what column of what row is causing the error - you have to modify your process to insert one row at a time, and then see which one fails. That's a pain, to put it mildly. Is there any reason that the error doesn't look more like this?

        String or Binary data would be truncated
        Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10)

    That would make it a lot easier to find and correct the value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?
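
    Until the engine says more, the offending value can be hunted down client-side before the insert - a sketch, assuming the rows are staged in a DataTable whose column names match the target table:

        Imports System.Data
        Imports System.Data.SqlClient

        Public Module TruncationChecker
            ' Compares each string value against the target column's declared length
            Public Sub ReportOversizedValues(ByVal conn As SqlConnection, ByVal staged As DataTable, ByVal targetTable As String)
                Dim sql As String = _
                    "SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH " & _
                    "FROM INFORMATION_SCHEMA.COLUMNS " & _
                    "WHERE TABLE_NAME = @t AND CHARACTER_MAXIMUM_LENGTH > 0"
                Dim cmd As New SqlCommand(sql, conn)
                cmd.Parameters.AddWithValue("@t", targetTable)
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        Dim col As String = reader.GetString(0)
                        Dim maxLen As Integer = reader.GetInt32(1)
                        If Not staged.Columns.Contains(col) Then Continue While
                        For Each row As DataRow In staged.Rows
                            Dim value As String = Convert.ToString(row(col))
                            If value.Length > maxLen Then
                                Console.WriteLine("Value ""{0}"" is too long for {1}.{2} ({3})", _
                                    value, targetTable, col, maxLen)
                            End If
                        Next
                    End While
                End Using
            End Sub
        End Module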

    Read the article

  • Are hash collisions with different file sizes just as likely as same file size?

    - by rwmnau
    I'm hashing a large number of files, and to avoid hash collisions, I'm also storing a file's original size - that way, even if there's a hash collision, it's extremely unlikely that the file sizes will also be identical. Is this sound (a hash collision is equally likely to be of any size), or do I need another piece of information (is a collision more likely to also be the same length as the original)? Or, more generally: Is every file just as likely to produce a particular hash, regardless of original file size?

    Read the article

  • Me.Invoke in VB.NET doesn't actually "Invoke" - threads stall on Invoke statement

    - by rwmnau
    I've got the following code:

        Public Delegate Sub SetStatusBarTextDelegate(ByVal StatusText As String)

        Private Sub SetStatusBarText(ByVal StatusText As String)
            If Me.InvokeRequired Then
                Me.Invoke(New SetStatusBarTextDelegate(AddressOf SetStatusBarText), StatusText)
            Else
                Me.labelScanningProgress.Text = StatusText
            End If
        End Sub

    The problem is that, when I call the "SetStatusBarText" sub from another thread, InvokeRequired is True (as it should be), but then my threads stall on the Me.Invoke statement - pausing execution shows them all just sitting there, not actually invoking anything. Any thoughts about why the threads seem to be afraid of the Invoke?
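
    A common cause of this exact symptom is a deadlock rather than a failure to invoke: Control.Invoke can't return until the UI thread pumps messages, so if the UI thread is itself blocked - for example, waiting on those worker threads to finish - everything stalls. One variation to try, assuming the status text doesn't have to be set synchronously, is to post the call instead of blocking on it:

        If Me.InvokeRequired Then
            ' BeginInvoke queues the call to the UI message loop and returns
            ' immediately, so the worker thread never waits on the UI thread
            Me.BeginInvoke(New SetStatusBarTextDelegate(AddressOf SetStatusBarText), StatusText)
        Else
            Me.labelScanningProgress.Text = StatusText
        End If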

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for: As automated as possible - ideally, without any involvement on my part once it's set up. Pushes a number of databases, in their entirety (including any schema changes), from one server to the other. Freely allows changes on the source server without breaking my process - for this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription. One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did. Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!

    Read the article

  • See if any application has a DLL from the GAC loaded

    - by rwmnau
    I'm trying to deploy new copies of my DLL to the GAC on remote servers, but I need to identify whether any currently running processes have a loaded copy of the DLL I'm replacing - I'd like to restart them, or at least tell the user. For example, BizTalk seems to load the DLLs it needs the first time they're used, and then replacing them keeps the old copy in memory until the Host Instances are restarted - something I could easily do as part of my deployment. Is there a way to tell, using .NET, which processes have loaded a particular DLL from the GAC? UPDATE: Some further investigation shows that Process Explorer has this functionality, and another Sysinternals tool, ListDLLs, does exactly what I want to be able to do. I'd like to know how they do it, since I'd love to replicate this functionality in my application without having to include and screen-scrape ListDLLs (if that's even allowed under the license).
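
    A sketch of the same check in pure .NET: a loaded assembly's file is memory-mapped into the process, so it shows up in Process.Modules alongside native DLLs. This needs sufficient rights on the machine, and some processes (protected ones, or a 32/64-bit mismatch) will throw and get skipped:

        Imports System.Collections.Generic
        Imports System.Diagnostics

        Public Module GacDllFinder
            ' Returns every process whose module list contains the given file name
            Public Function ProcessesUsing(ByVal dllFileName As String) As List(Of Process)
                Dim hits As New List(Of Process)
                For Each proc As Process In Process.GetProcesses()
                    Try
                        For Each m As ProcessModule In proc.Modules
                            If String.Equals(m.ModuleName, dllFileName, StringComparison.OrdinalIgnoreCase) Then
                                hits.Add(proc)
                                Exit For
                            End If
                        Next
                    Catch ex As Exception
                        ' Access denied or bitness mismatch - skip this process
                    End Try
                Next
                Return hits
            End Function
        End Module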

    Read the article

  • Can I use VS2010's Intellitrace to gather data for a Windows Service?

    - by rwmnau
    I have a Windows service that I'd like to gather some debugging data on using IntelliTrace - the problem is that you can't debug a Windows service by starting it directly from inside VS. I have the service installed, and the very first statement in Service.Start is "Debugger.Break", which allows me to attach VS. However, you can't use IntelliTrace if a process is already started when you attach. Does anybody know of a workaround for this?

    Read the article

  • Why use SyncLocks in .NET for simple operations when Interlocked class is available?

    - by rwmnau
    I've been doing simple multi-threading in VB.NET for a while, and have just gotten into my first large multi-threaded project. I've always done everything using the SyncLock statement because I didn't think there was a better way. I just learned about the Interlocked class - it makes it look as though all this:

        Private SomeInt As Integer
        Private SomeInt_LockObject As New Object

        Public Sub IncrementSomeInt()
            SyncLock SomeInt_LockObject
                SomeInt += 1
            End SyncLock
        End Sub

    can be replaced with a single statement:

        Interlocked.Increment(SomeInt)

    This handles all the locking internally and modifies the number. This would be much simpler than writing my own locks for simple operations (longer-running or more complicated operations obviously still need their own locking). Is there a reason I'd keep rolling my own locking, using dedicated locking objects, when I can accomplish the same thing using the Interlocked methods?
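
    One boundary worth sketching, with illustrative names (StatsLock, Total, Count, newValue): Interlocked calls are atomic individually, but an invariant that spans more than one variable still needs a lock, and a single-variable update more complex than an add or exchange needs a CompareExchange retry loop:

        Private StatsLock As New Object
        Private Total As Long, Count As Long
        Dim newValue As Long = 42                    ' illustrative

        ' Not safe as two separate Interlocked calls: Total and Count must
        ' change together for Total/Count to stay meaningful
        SyncLock StatsLock
            Total += newValue
            Count += 1
        End SyncLock

        ' Lock-free compound update of a single variable: retry until no other
        ' thread changed SomeInt between our read and our write
        Dim original, computed As Integer
        Do
            original = SomeInt
            computed = Math.Min(original + 1, 100)   ' example: increment with a cap
        Loop While Interlocked.CompareExchange(SomeInt, computed, original) <> original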

    Read the article

  • What permissions needed to connect to SQL Server Integration Services

    - by rwmnau
    I need to allow a consultant to connect to SSIS on a SQL Server 2008 box without making him a local administrator. If I add him to the local Administrators group, he can connect to SSIS just fine, but it seems that I can't grant him enough permissions through SQL Server to give him these rights without being a local admin. I've added him to every role on the server, every database role in MSDB shy of DBO, and he's still not able to connect. I don't see any SSIS-related Windows groups on the server - is membership in the local Administrators group really required to connect to the SSIS instance on a SQL Server? It seems like there should be somewhere I can grant "SSIS Admin" rights to a user (even if it's a Windows account and not a SQL account), but I can't find that place. UPDATE: I've found an MSDN article (see the section titled "Eliminating the 'Access is Denied' Error") that describes how to resolve this problem, but even after following the steps, I'm still not able to connect. Just wanted to add it to the discussion.

    Read the article

  • What to name column in database table that holds versioning number

    - by rwmnau
    I'm trying to figure out what to call the column in my database table that holds an INT to specify a "record version". I'm currently using "RecordOrder", but I don't like that, because people think higher=newer, but the way I'm using it, lower=newer (with "1" being the current record, "2" being the second most current, "3" older still, and so on). I've considered "RecordVersion", but I'm afraid that would have the same problem. Any other suggestions? "RecordAge"? I'm doing this because when I insert into the table, instead of having to find out what version is next, then run the risk of having that number stolen from me before I write, I just insert with a "RecordOrder" of 0. There's a trigger on the table AFTER INSERT that increments all the "RecordOrder" numbers for that key by 1, so the record I just inserted becomes "1", and all others are increased by 1. That way, you can get a person's current record by selecting RecordOrder=1, instead of getting the MAX(RecordOrder) and then selecting that. PS - I'm also open to criticism about why this is a terrible idea and I should be incrementing this index instead. This just seemed to make lookups much easier, but if it's a bad idea, please enlighten me! Some details about the data, as an example: I have the following database table:

        CREATE TABLE AmountDue (
            CustomerNumber INT,
            AmountDue      DECIMAL(14,2),
            RecordOrder    SMALLINT,
            RecordCreated  DATETIME
        )

    A subset of my data looks like this:

        CustomerNumber  AmountDue  RecordOrder  RecordCreated
        100             0          1            2009-12-19 05:10:10.123
        100             10.05      2            2009-12-15 06:12:10.123
        100             100.00     3            2009-12-14 14:19:10.123
        101             5.00       1            2009-11-14 05:16:10.123

    In this example, there are three rows for customer 100 - they owed $100, then $10.05, and now they owe nothing. Let me know if I need to clarify it some more. UPDATE: The "RecordOrder" and "RecordCreated" columns are not available to the user - they're only there for internal use, and to help figure out which is the current customer record. Also, I could use them to return an appropriately-ordered customer history, though I could just as easily do that with the date. I can accomplish the same thing as an incrementing "Record Version" with just the RecordCreated date, I suppose, but that removes the convenience of knowing that RecordOrder=1 is the current record, and I'm back to doing a sub-query with MAX or MIN on the DateTime to determine the most recent record.

    Read the article

  • Visual Studio detaches from application as soon as debugging starts

    - by rwmnau
    I have a web application that I've always been able to run in Visual Studio and debug just fine (breakpoints work, I can pause execution, etc.). Recently, the behavior changed suddenly, and a few things happen: I start debugging, and it launches IE and loads the application, but after a few seconds (sometimes the page hasn't even displayed yet), Visual Studio acts as if debugging has stopped - I'm able to edit code in VS again, and the "Play" button on the toolbar is enabled. The application continues to run in the IE window just spawned, but I'm not attached to it. During the few seconds that VS is "debugging", because it detaches, my breakpoints show as hollow - as if I'm set to "Release" mode and they won't be hit. In fact, I have a breakpoint set in Page_Load, and it skips right by. I've checked, and I'm set to Debug mode, though the compile-mode dropdown is missing from my toolbar (I checked in the build properties to ensure I was in Debug mode). Can anybody shed some light here?

    Read the article

  • Why do I need to explicitly specify all columns in a SQL "GROUP BY" clause - why not "GROUP BY *"?

    - by rwmnau
    This has always bothered me - why does the GROUP BY clause in a SQL statement require that I include all non-aggregate columns? These columns should be included by default - a kind of "GROUP BY *" - since I can't even run the query unless they're all included. Every column has to either be an aggregate or be specified in the "GROUP BY", but it seems like anything not aggregated should be automatically grouped. Maybe it's part of the ANSI-SQL standard, but even so, I don't understand why. Can somebody help me understand the need for this convention?

    Read the article

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well - can BITS tell the difference? If I was looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?
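
    BITS reportedly watches the network card's own byte counters rather than trying to attribute traffic per-program (so it can't really tell *why* other traffic dropped - it just uses whatever the interface has free), and that approach is easy to approximate in .NET: sample the NIC totals, subtract what your own transfer moved, and treat the remainder as other people's traffic. A sketch, where MyBytesThisInterval, ReduceTransferRate, and IncreaseTransferRate are hypothetical hooks into the transfer code:

        Imports System.Net.NetworkInformation
        Imports System.Threading

        Public Module PoliteThrottle
            Public MyBytesThisInterval As Long    ' incremented by the transfer code (hypothetical)

            Public Sub MonitorLoop(ByVal nic As NetworkInterface)
                Dim lastTotal As Long = TotalBytes(nic)
                Do
                    Thread.Sleep(1000)
                    Dim total As Long = TotalBytes(nic)
                    Dim othersBytes As Long = (total - lastTotal) - Interlocked.Exchange(MyBytesThisInterval, 0)
                    lastTotal = total
                    If othersBytes > 50 * 1024 Then      ' threshold is an arbitrary example
                        ReduceTransferRate()             ' someone else wants the pipe - back off hard
                    Else
                        IncreaseTransferRate()           ' network looks idle - creep back up
                    End If
                Loop
            End Sub

            Private Function TotalBytes(ByVal nic As NetworkInterface) As Long
                Dim stats As IPv4InterfaceStatistics = nic.GetIPv4Statistics()
                Return stats.BytesReceived + stats.BytesSent
            End Function

            Private Sub ReduceTransferRate()
                ' placeholder - the real app would shrink its send window here
            End Sub

            Private Sub IncreaseTransferRate()
                ' placeholder - the real app would grow its send window here
            End Sub
        End Module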

    Read the article

  • Check whether a folder is a local or a network resource in .NET

    - by rwmnau
    Is there a quick way to check whether a path I have is on a local disk or somewhere on the network? I can't just check to see if it's a drive letter vs. UNC, because that would incorrectly identify mapped drives as local. I assumed it would be a boolean in the DirectoryInfo object, but it appears that it's not. I've found classic VB code to do this check (through an API), but nothing for .NET so far.
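
    One .NET 2.0+ approach, sketched below: DriveInfo.DriveType reports DriveType.Network even for mapped drive letters, which covers exactly the case the drive-letter-vs-UNC test gets wrong:

        Imports System.IO

        Public Module PathKind
            Public Function IsNetworkPath(ByVal fullPath As String) As Boolean
                If fullPath.StartsWith("\\") Then Return True    ' UNC path
                Dim root As String = Path.GetPathRoot(fullPath)  ' e.g. "Z:\"
                Return New DriveInfo(root).DriveType = DriveType.Network
            End Function
        End Module

    So IsNetworkPath("Z:\data") returns True when Z: is a mapped share, while IsNetworkPath("C:\data") returns False for a local disk.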

    Read the article

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start to stamp out bad practices. For example, I was thinking of starting with the following couple, so this is the kind of stuff I'm looking for: Prohibit empty "Catch" blocks - this would prevent applications from swallowing any exceptions without at least requiring a comment explaining why it's not necessary to do anything with the exception. Prohibit "Catch ex as Exception" generic exception handling - instead, require code to catch specific types of exceptions and deal with them appropriately, instead of just building catch-all handling. Require a check-in comment - this one should be self-explanatory, though it seems that TFS (like most other source-control systems) doesn't require a comment by default. While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these. Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development, and a few of the people who will use the repository develop with Eclipse).

    Read the article

  • Should I auto-increment the assembly version when I build my software?

    - by rwmnau
    In Visual Studio 2003, you could easily set your project assembly to auto-increment every time you built it, but with Visual Studio 2005, this functionality was removed. You can still auto-increment your assembly version on every build, but it's a complicated custom build step instead of an integrated feature. I'm not sure why this was removed, but here's a question I should have asked a while ago - Should I be using a workaround to continue to auto-increment when I build, or is there a good reason to stop doing this, in favor of manually incrementing? Since Microsoft removed it from VS, perhaps there's a good reason, and I'm wondering if anybody knows it.
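
    For what it's worth, the compiler-level wildcard still works in later versions without any custom build step - putting an asterisk in AssemblyInfo.vb makes the compiler fill in the build and revision numbers on every compile:

        Imports System.Reflection

        ' Build number = days since 2000-01-01; revision = half the seconds
        ' since local midnight - so every build gets a distinct, increasing version
        <Assembly: AssemblyVersion("1.0.*")>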

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.
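
    The filtering core such a pipeline component could use is sketched below - the BizTalk IComponent plumbing around it is omitted, and keepPrefix stands in for whichever leading string marks the interesting rows. Dropping the unwanted rows before the flat-file disassembler runs means the expensive CSV-to-XML conversion only ever sees the 200-300 records that matter:

        Imports System.IO

        Public Module CsvPrefilter
            ' Copies only the rows that start with keepPrefix into a new stream
            Public Function FilterRows(ByVal source As Stream, ByVal keepPrefix As String) As Stream
                Dim filtered As New MemoryStream()
                Dim writer As New StreamWriter(filtered)
                Using reader As New StreamReader(source)
                    While Not reader.EndOfStream
                        Dim line As String = reader.ReadLine()
                        If line.StartsWith(keepPrefix) Then writer.WriteLine(line)
                    End While
                End Using
                writer.Flush()
                filtered.Position = 0
                Return filtered
            End Function
        End Module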

    Read the article

  • Processing a database queue across multiple threads - design advice

    - by rwmnau
    I have a SQL Server table full of orders that my program needs to "follow up" on (call a webservice to see if something has been done with them). My application is multi-threaded, and could have instances running on multiple servers. Currently, every so often (on a Threading timer), the process selects 100 rows at random (ORDER BY NEWID()) from the list of "unconfirmed" orders and checks them, marking off any that come back successfully. The problem is that there's a lot of overlap between the threads, and between the different processes, and there's no guarantee that a new order will get checked any time soon. Also, some orders will never be "confirmed" and are dead, which means that they get in the way of orders that need to be confirmed, slowing the process down if I keep selecting them over and over. What I'd prefer is that all outstanding orders get checked, systematically. I can think of two easy ways to do this: The application fetches one order to check at a time, passing in the last order it checked as a parameter, and SQL Server hands back the next order that's unconfirmed - more database calls, but this ensures that every order is checked in a reasonable timeframe; however, different servers may re-check the same order in succession, needlessly. Or, SQL Server keeps track of the last order it asked a process to check up on, maybe in a table, and gives a unique order to every request, incrementing its counter - this involves storing the last order somewhere in SQL, which I wanted to avoid, but it also ensures that threads won't needlessly check the same orders at the same time. Are there any other ideas I'm missing? Does this even make sense? Let me know if I need some clarification.
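
    A third option, sketched below with illustrative table and column names: let the database hand out the work atomically. An UPDATE ... OUTPUT with the READPAST hint (SQL Server 2005+) claims a row that no other thread or server can claim at the same moment, with no counter table needed, and a stale-checkout cutoff keeps dead orders from being hammered while still retrying them eventually:

        Imports System.Data.SqlClient

        Public Module OrderClaimer
            ' Atomically claims the next unconfirmed order; returns -1 if none are available
            Public Function ClaimNextOrder(ByVal conn As SqlConnection) As Integer
                Dim sql As String = _
                    "UPDATE TOP (1) dbo.Orders WITH (ROWLOCK, READPAST) " & _
                    "SET CheckedOutAt = GETUTCDATE() " & _
                    "OUTPUT inserted.OrderID " & _
                    "WHERE Confirmed = 0 " & _
                    "  AND (CheckedOutAt IS NULL OR CheckedOutAt < DATEADD(minute, -5, GETUTCDATE()))"
                Dim result As Object = New SqlCommand(sql, conn).ExecuteScalar()
                If result Is Nothing OrElse result Is DBNull.Value Then Return -1
                Return CInt(result)
            End Function
        End Module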

    Read the article
