Search Results

Search found 32343 results on 1294 pages for 'good practice'.

  • Process a batch of items, return an object to report on status

    - by Naeem Sarfraz
    I'm looking for a pattern (or good practice) for the following scenario: my function

        List<BatchItemResponse> Process(List<BatchItem> Data) { .. }

    will process a list of data and return info on whether each item in the batch could be processed:

        struct BatchItemResponse
        {
            int BatchItemID;
            bool Processed;
            string Description;
        }

    Any thoughts? Is what I've proposed as good as it gets?
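
    One common shape for this, sketched below under some assumptions: the struct's fields are public, BatchItem exposes an ID property, and a hypothetical per-item worker ProcessItem(item) throws on failure. Catching per item means one bad record produces a failure entry instead of aborting the whole batch.

        using System;
        using System.Collections.Generic;

        List<BatchItemResponse> Process(List<BatchItem> data)
        {
            var results = new List<BatchItemResponse>(data.Count);
            foreach (var item in data)
            {
                try
                {
                    ProcessItem(item);  // hypothetical per-item worker
                    results.Add(new BatchItemResponse
                    {
                        BatchItemID = item.ID,  // assumed key property
                        Processed = true,
                        Description = "OK"
                    });
                }
                catch (Exception ex)
                {
                    results.Add(new BatchItemResponse
                    {
                        BatchItemID = item.ID,
                        Processed = false,
                        Description = ex.Message  // record why this item failed
                    });
                }
            }
            return results;
        }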

  • Picture position in a desktop background slideshow

    - by user334017
    I'm using Windows 7 and I have a slideshow for my desktop background. My pictures are all of varying sizes: some are smaller than my screen's resolution, some are the same, and some are much larger. To look good, smaller wallpapers need their picture position set to 'center' (where the picture is centered on the screen and the background color fills the edges), and larger backgrounds need to be set to 'fit' (where the picture is scaled down so the entire image is shown on the screen). I can't figure out a way to have the picture position change intelligently during the slideshow so that the wallpapers always look good. Any insight?

  • Reading a file used by another process

    - by Tophe
    I know this has been up for discussion, but I can't find an answer to my specific problem. I am monitoring a text file that is being written to by a server program. Every time the file changes, its content should be output to a window in my program. The problem, of course, is that I can't use a StreamReader on the file as it is "being used by another process". Setting up a FileStream with ReadWrite won't do any good either, since I cannot control the process that is using the file. However, you CAN open the file in Notepad, so somehow it must be possible to access it even though the server is using it. Is there a good way around this? Should I monitor the file, make a temp copy when it changes, read the temp copy, and then delete it? I need to get hold of the text in the file whenever the server changes it.
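
    For what it's worth, Notepad succeeds because it opens the file without demanding exclusive access. A minimal sketch of the same trick; whether it works depends on the sharing mode the server process used when it opened the file:

        using System.IO;

        // Ask only for read access, and explicitly allow the writer to keep
        // reading and writing. StreamReader's convenience constructors are
        // stricter about sharing, which is why they fail on a file that a
        // writer still holds open.
        static string ReadSharedFile(string path)
        {
            using (var stream = new FileStream(path, FileMode.Open,
                                               FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }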

  • Loading the last related record instantly for multiple parent records using Entity Framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below?

    For our next release I am trying to come up with a performant way to show the placed orders for the logged-on customer. Paging is always a good technique when a lot of data is available, but I would like to see an answer that does not rely on it.

    Here's the story: a customer places an order, which gets orderstatus = PENDING. Depending on some strategy we move that order up the chain to get it APPROVED. Every change of status is logged, so we can see a trace of statuses, and maybe even an extra line of comment per status, which can provide valuable information to whoever sees this order in an interface.

    So an Order is linked to a Customer. One order can have multiple order statuses stored in OrderStatusHistory. In my test scenario I am using a customer with 100+ orders, each with about 5 records in the OrderStatusHistory table. I would like to see all orders on one page (no paging), where for each Order I show the last relevant status and the extra comment, if there is any for this last status (both fields coming from OrderStatusHistory: the record with the highest Id for the given OrderId).

    There are multiple scenarios I have tried, but I would like to see any other potential solutions or comments on the things I have already tried:

    1. Using Include() when getting Orders. This still results in multiple queries launched against the database: each order triggers an extra query to get all statuses in the history table, so all statuses are fetched instead of just the last relevant one, and 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database.

    2. Having 2 computed columns on the database, LastStatus and LastStatusInformation, and a regular LINQ query which gets those columns through the entity model. The problem with this approach is that those computed columns are determined using a scalar function, which cannot be changed without removing the formula from the computed column, etc.

    In the end I am very familiar with SQL and stored procedures, but since the rest of the data layer uses Entity Framework I would like to stick with it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this:

        WITH cte (RN, OrderId, [Status], Information) AS (
            SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC),
                   OrderId, [Status], Information
            FROM OrderStatus
        )
        SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.*
        FROM [Order] o
        INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1
        WHERE CustomerId = @CustomerId
        ORDER BY 1 DESC;

    which returns all orders for the customer with the status information provided by the common table expression. Does anyone know a good approach using Entity Framework?
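
    For comparison, the same "latest row per parent" shape can be expressed as a single LINQ to Entities query; a sketch, assuming a context with an Orders set and a navigation property OrderStatusHistory on Order (names guessed from the description):

        var orders =
            from o in ctx.Orders
            where o.CustomerId == customerId
            orderby o.Id descending
            select new
            {
                Order = o,
                // Highest Id in the history = last relevant status
                LastStatus = o.OrderStatusHistory
                              .OrderByDescending(h => h.Id)
                              .FirstOrDefault()
            };

    Projecting into an anonymous type like this keeps it to one round trip; whether the SQL Entity Framework generates for it is acceptable compared to the hand-written CTE is something to verify in a profiler.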

  • Force ntpd to make changes in smaller steps

    - by David Wolever
    The NTP documentation says:

        Under ordinary conditions, ntpd adjusts the clock in small steps so that the timescale is effectively continuous and without discontinuities. (http://doc.ntp.org/4.1.0/ntpd.htm)

    However, this is not at all what I have noticed in practice. If I manually change the system time backwards or forwards 5 or 10 seconds and then start ntpd, I notice that it adjusts the clock in one shot. For example, with this code:

        #!/usr/bin/env python
        import time
        last = time.time()
        while True:
            time.sleep(1)
            print time.time() - last
            last = time.time()

    When I first change the time, I'll notice something like:

        1.00194311142
        8.29711604118
        1.0010509491

    Then when I start ntpd, I'll see something like:

        1.00194311142
        -8.117301941
        1.0010509491

    Is there any way to force ntpd to make the adjustments in smaller steps?
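
    For reference, ntpd does have knobs for this; a sketch of the two usual ones (check your version's man page, and note that slewing is capped at roughly 0.5 ms per second of clock time, so a multi-second offset takes hours to correct):

        # Option 1: run ntpd with -x, which slews rather than steps any
        # offset smaller than 600 seconds.
        ntpd -x

        # Option 2: in /etc/ntp.conf, set the step threshold to zero so
        # the clock is never stepped at all.
        tinker step 0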

  • Can I create a DC without a DNS Server?

    - by onik
    So as the title says, I need to promote a standalone Win2008R2 server to a Domain Controller, and I don't need a DNS server (I think), as there will be no clients connected to the domain; it will only be used for Remote Desktop Services. Yes, I know it's considered bad practice to install other roles on the DC, but in this case it's necessary. Do I need to install the DNS Server role, and if I do, how do I make it as transparent as possible?

    EDIT: It seems that I do need to install the DNS Server role, so how can I configure it not to mess up my entire domain? For example: the server I need to promote is rdc.mydomain.com, and it has an A record pointing to its IP in the current DNS, while the other servers under mydomain.com run Linux and don't need to know anything about this Windows box. The domain uses a third-party DNS, and all edits and updates have to be done via a separate web page; our servers don't have write/update access.

  • Has anyone figured out how to use the same username with different passwords? (Windows)

    - by Coder
    Tried Googling, tried net use, tried everything I could, with no results. I have a PC with users, and I have a network server with shared folders. For some users the usernames of the share and the local account match, but the passwords are different (a good security practice). Unfortunately, Windows doesn't want to remap the drives on login, and asks for credentials when I try to connect. If I enter the password, the connection succeeds, but it still fails on the next login, even if I have checked the "remember" checkbox.

        On PC:  usera@machinea  pass1
        On NAS: usera@nas       pass2

        net use z: \\nasip\usera /user:nasip\usera pass2 /persistent:yes

    The credential store seems to have the user credentials stored, but the mapping fails every time.
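
    One thing worth trying (a sketch; behaviour varies between Windows versions) is storing the share credentials explicitly with cmdkey, so the drive mapping no longer piggybacks on the logon password:

        rem Save the NAS credentials under the server's address, then map
        rem the drive without embedding the password in the net use call.
        cmdkey /add:nasip /user:nasip\usera /pass:pass2
        net use z: \\nasip\usera /persistent:yes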

  • Only ONE Outlook 2010 installation gets "Cannot connect to Exchange server" when setting up a new profile

    - by Johnny PDEX
    Exchange 2010, single-server installation (a small production environment; I know it's not best practice). OWA connectivity has been confirmed, and Autodiscover is configured and working properly for EVERY other installation. Other user accounts were tested on the problem Outlook installation; none can connect. Windows Firewall is pre-configured by Group Policy, with the only modifications relating to remote management; the firewall has also been disabled during the diagnostic period. Network discovery and file sharing are enabled on the workstation as well. Windows 7 Professional, latest updates installed. Driving me nuts. Help, serverfault?

  • Configuring a Linux server to send traffic to local machines using the local IP address

    - by gkdsp
    Two Linux servers, server1 and server2, are on the same local network (they also have access to an external network). Server2 has a local IP of 192.168.0.2 and a host name of host2.mydomain.com.

    Question 1: If an application on server1 sends traffic to server2 using the host name host2.mydomain.com, what determines whether this traffic is routed to server2 over the local or the external network?

    Question 2: To ensure that all traffic sent from server1 to server2 always uses the local network, could I simply include the following in server1's /etc/hosts file?

        192.168.0.2    host2.mydomain.com

    The thinking being: if the servers are always on the same network, there should never be a need for server2 to send traffic to server1 via the external network (that I can think of, anyway). Is this done in practice, or is some other method preferred?

  • Design Solution For Storing-Fetching Images

    - by Chaitanya
    This is a design question I'm facing. I have a collection of 1500 images which are to be displayed on an ASP.NET page; the images to be displayed differ from one page to another, and the count will increase over time.

    a.) Is it a good idea to keep the images in the database? The round trip time to fetch them from the database might be high.

    b.) Is it good to have all the images in a directory, with a virtual file system over it, so the application accesses the images from the directory?

    Is there any particular design strategy in a traditional database for fetching images with the least round trip time, and does any solution other than a traditional database exist?

    PS: I use SQL Server to store these images.
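
    A common middle ground (a sketch only, built on a hypothetical Images(Id, FilePath) table and a hypothetical cached lookup helper) is to keep just the file path in SQL Server and the bytes on disk, so pages stream images straight from the file system:

        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Resolve the image id to a disk path. ImageCatalog is a
                // hypothetical helper that caches the Id -> FilePath mapping
                // loaded from the Images table.
                string path = ImageCatalog.PathFor(context.Request["id"]);
                context.Response.ContentType = "image/jpeg";
                context.Response.WriteFile(path);  // stream bytes from disk
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    The database round trip then carries only a short string per image, and the web server and OS file cache do the heavy lifting for the bytes themselves.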

  • Method of documentation for SQL Stored Procedures

    - by Chapso
    I work in a location where a single person is responsible for creating and maintaining all stored procedures for the SQL servers, and is the conduit between the software developers and the database. There are a lot of stored procedures in place, and with a database diagram it is simple enough 90% of the time to figure out what arguments a stored procedure takes and what it returns. For the other 10% of the time, however, it would be helpful to have a reference. Since the DBA is a busy guy (aren't we all), it would be good to have some program which documents the stored procedures to a file, so the developers can consult it without needing access to the SPs themselves. The question is, does anyone know of a good program to accomplish this? Basically what we need is something that gives the name of the SP, the argument list, and the output, both with data types and a nullable flag.
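
    As a stopgap while looking for a proper tool, much of that reference can be pulled straight from SQL Server's catalog views; a sketch (SQL Server 2005 or later):

        SELECT  p.name   AS ProcedureName,
                prm.name AS ParameterName,
                t.name   AS DataType,
                prm.max_length,
                prm.is_output
        FROM sys.procedures p
        LEFT JOIN sys.parameters prm ON prm.object_id = p.object_id
        LEFT JOIN sys.types t ON t.user_type_id = prm.user_type_id
        ORDER BY p.name, prm.parameter_id;

    Result sets are harder: what a procedure SELECTs back isn't declared in metadata, which is probably why a dedicated documentation tool is worth finding.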

  • DNS Resolver Speed Techniques

    - by Rob Olmos
    I recently received a reply to my concerns about some DNS servers being slower than others despite all servers being anycast:

        In practice, most resolvers won't be impacted by the slower paths to some of the name servers in the set. Most resolvers employ various techniques to provide fast lookups, such as preferring name servers that were previously seen to be faster, sending simultaneous queries to multiple name servers, or pre-fetching queries before the TTL has expired.

    I was not aware that resolvers used these techniques, and I was unsuccessful at searching for more info about them. Are there any names for these techniques? Which resolvers employ which of them?

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open source high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit: plain LRU wouldn't work because remote batch/spider processes hit the service regularly, and plain LFU wouldn't work because content ages. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives?

    EDIT: I'm not looking for exactly the same thing, just something that could handle those two situations. Perhaps some simple strategy with timestamps and sources. Many programmers must have faced this situation before; that's why the "good enough" bit.
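
    A sketch of the "timestamps" idea from the question (an illustration, not a drop-in ARC replacement): bias eviction by a hit count that decays over time, so a spider burst can't flush genuinely hot entries, while stale content still ages out.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class AgingCache<TKey, TValue>
        {
            private class Entry
            {
                public TValue Value;
                public double Score;       // decayed hit count
                public DateTime LastTouch;
            }

            private readonly Dictionary<TKey, Entry> map = new Dictionary<TKey, Entry>();
            private readonly int capacity;
            private readonly double halfLife;  // seconds for a hit's weight to halve

            public AgingCache(int capacity, double halfLifeSeconds)
            {
                this.capacity = capacity;
                this.halfLife = halfLifeSeconds;
            }

            private double Decayed(Entry e, DateTime now)
            {
                // Popularity fades exponentially, so a burst of spider hits
                // stops protecting an entry after a few half-lives.
                return e.Score * Math.Pow(0.5, (now - e.LastTouch).TotalSeconds / halfLife);
            }

            public bool TryGet(TKey key, out TValue value)
            {
                Entry e;
                if (map.TryGetValue(key, out e))
                {
                    DateTime now = DateTime.UtcNow;
                    e.Score = Decayed(e, now) + 1.0;  // reward the hit
                    e.LastTouch = now;
                    value = e.Value;
                    return true;
                }
                value = default(TValue);
                return false;
            }

            public void Put(TKey key, TValue value)
            {
                DateTime now = DateTime.UtcNow;
                if (map.Count >= capacity && !map.ContainsKey(key))
                {
                    // Evict the lowest decayed score: rarely used, or long
                    // untouched. A heap would avoid this O(n) scan.
                    TKey victim = map.OrderBy(kv => Decayed(kv.Value, now)).First().Key;
                    map.Remove(victim);
                }
                map[key] = new Entry { Value = value, Score = 1.0, LastTouch = now };
            }
        }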

  • Git: What is a tracking branch?

    - by jerhinesmith
    Can someone explain a "tracking branch" as it applies to git? Here's the definition from git-scm.com:

        A 'tracking branch' in Git is a local branch that is connected to a remote branch. When you push and pull on that branch, it automatically pushes and pulls to the remote branch that it is connected with. Use this if you always pull from the same upstream branch into the new branch, and if you don't want to use "git pull" explicitly.

    Unfortunately, being new to git and coming from SVN, that definition makes absolutely no sense to me. I'm reading through "The Pragmatic Guide to Git" (great book, by the way), and it seems to suggest that tracking branches are a good thing: after creating your first remote (origin, in this case), you should set up your master branch to be a tracking branch. Unfortunately, it doesn't cover why a tracking branch is a good thing or what benefits you get by setting your master branch up to track your origin repository. Can someone please enlighten me (in English)?
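
    For what it's worth, the practical benefit shows up on the command line; a sketch (the -u flag needs git 1.7.0 or later):

        # Creating a local branch from a remote one sets up tracking:
        git checkout -b mywork origin/master

        # Or push an existing branch and record the upstream in one step:
        git push -u origin master

        # With tracking in place, no arguments are needed any more:
        git pull   # knows to fetch and merge from origin/master
        git push   # knows to push back to origin/master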

  • If you don't own the proprietary database engine, what is the best way to convert a database to MySQL?

    - by John Robertson
    I work for a very small company. I was recently faced with the question of whether there is a good way to convert a proprietary database to a MySQL database without owning the proprietary database engine. For example, if one is given a large Oracle database file (or choose your favorite proprietary database engine format) but doesn't have a license for the Oracle database engine, is there a good, perfectly reliable way to convert it to a MySQL database format that can be read with the MySQL database engine? My question is deliberately vague about which proprietary format is the source, because there would be multiple sources and it looks like they would be "various and sundry". My suspicion is that there is no perfectly reliable way, especially across a wide variety of proprietary databases. If there are a few proprietary formats for which this is possible, I would still be interested in knowing, though "various and sundry" is probably the real issue. Minimizing cost and effort while getting a correct conversion is key, so I suspect this belongs on the "not possible" list. -John

  • Which 'faces' technology should I use with GlassFish 2.1 and NetBeans 6.7?

    - by SteJav
    I'm running GlassFish 2.1 and using NetBeans 6.7. I'd like to create a web interface to my data using JSF 1.2. Trouble is, I'm not sure which 'faces' technology to learn (one that comes with some good documentation). JBoss/RichFaces seems pretty good on documentation, but I'm using GlassFish. Any thoughts? The choices appear overwhelming: Tomahawk, Tobago, Trinidad, ICEfaces, RCFaces, Netadvantage, WebGalileoFaces, QuipuKit, BluePrints, Woodstock, JBoss RichFaces, Ajax4jsf, ILOG, Oracle ADF, G4JSF, Simplica, Backbase, jenia4faces, VisualWebPack, DynaFaces, IBM Impl, Dinamica, Mojarra, PrimeFaces, jQuery, OpenFaces, ZK, ExtJS. Has anybody had experience with any of the above and found the documentation clear to a beginner? Being a JSF/web beginner, I tried some ICEfaces and Mojarra tutorials and had a go at getting RichFaces working with NetBeans and GlassFish, but no luck; lots of XML complaints. I'm clearly missing some huge chunks of configuration, but I can't find any documentation to help me. Any suggestions would be much appreciated :-)

  • llvm/clang re-compilation with itself

    - by teppic
    After reading many questions on here, I decided to give clang a go, and installed the SVN version on Ubuntu 12.04 (64-bit). I was expecting issues, but it all installed smoothly with no warnings. I noticed, though, that when re-running the configure script, if clang/clang++ is in your path it will be chosen over gcc/g++ for compiling LLVM/clang itself. Is it a good idea to recompile llvm/clang with itself? I know this is absolutely standard with gcc, but I've read that clang's C++ implementation isn't quite good enough yet (maybe that's out-of-date info...).
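
    Clang has been able to build itself for some time, so self-hosting is normal practice. If you would rather pin the compiler explicitly than rely on whatever configure finds in PATH, a sketch for the autoconf build of that era:

        # Stage 1: build clang with the system gcc
        CC=gcc CXX=g++ ../llvm/configure && make

        # Stage 2: rebuild llvm/clang with the freshly built clang
        CC=clang CXX=clang++ ../llvm/configure && make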
