Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • How do I download a large file (via HTTP) in .NET?

    - by nickcartwright
    I need to download a LARGE file (2GB) over HTTP in a C# console app. Problem is, after about 1.2GB, the app runs out of memory. Here's the code I'm using:

        WebClient request = new WebClient();
        request.Credentials = new NetworkCredential(username, password);
        byte[] fileData = request.DownloadData(baseURL + fName);

    As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk. Does anyone know how I could do this?
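
    One way to do it, sketched below: stream the response straight to a FileStream in fixed-size chunks so only one small buffer ever lives in memory. This is a sketch rather than the poster's code; url, username, password and destinationPath are placeholders, and WebClient.OpenRead is used instead of DownloadData.

        using System;
        using System.IO;
        using System.Net;

        class ChunkedDownloader
        {
            static void DownloadToFile(string url, string username, string password, string destinationPath)
            {
                WebClient request = new WebClient();
                request.Credentials = new NetworkCredential(username, password);

                // OpenRead returns the response stream instead of buffering the whole body.
                using (Stream response = request.OpenRead(url))
                using (FileStream file = File.Create(destinationPath))
                {
                    byte[] buffer = new byte[64 * 1024];   // 64 KB chunks
                    int bytesRead;
                    while ((bytesRead = response.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        file.Write(buffer, 0, bytesRead);
                    }
                }
            }
        }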

  • Multiple column foreign key constraints

    - by eugene4968
    I want to set up table constraints for the following scenario and I'm not sure how to do it, or if it's even possible, in SQL Server 2005. I have three tables A, B, C. C is a child of B. B will have an optional foreign key (may be null) referencing A. For performance reasons I also want table C to have the same foreign key reference to table A. The constraint on table C should be that C must reference its parent (B) and also have the same foreign key reference to A as its parent. Anyone have any thoughts on how to do this?

  • Why should I use an N-Tier approach when using a SqlDataSource is a lot easier?

    - by The_AlienCoder
    When it comes to web development I have always tried to work SMART, not HARD. So for a long time my approach to interacting with databases in my ASP.NET projects has been this:

    1) Create my stored procedures
    2) Drag a SqlDataSource control onto my aspx page
    3) Bind a DataList control to my SqlDataSource
    4) Insert, update and delete by using my DataList, or programmatically using the built-in SqlDataSource methods, e.g.

        MySqlDataSource.InsertParameters["author"].DefaultValue = TextBox1.Text;
        MySqlDataSource.Insert();

    Recently, however, I got a relatively easy web project, so I decided to employ a 3-tier model... but I got exhausted halfway through and it just didn't seem worth it! It seemed like I was working too HARD on a project that could have been easily accomplished by a couple of SqlDataSource controls. So why is the N-Tier model better than my approach? Has it anything to do with performance? What are the advantages of the ObjectDataSource control over the SqlDataSource control?
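
    For contrast, a minimal sketch of what the tiered alternative looks like. The class and property names below are invented placeholders, not the poster's schema: the page binds to a small business-logic class, so the markup never contains SQL or parameters.

        using System.Collections.Generic;

        // Hypothetical business-logic layer types.
        public class Author
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class AuthorService
        {
            // In a real tiered app this would call a data-access layer
            // (the same stored procedures, an ORM, etc.) instead of returning a stub list.
            public static List<Author> GetAuthors()
            {
                return new List<Author> { new Author { Id = 1, Name = "Sample author" } };
            }
        }

    The markup side would then be something like <asp:ObjectDataSource ID="AuthorsSource" runat="server" TypeName="AuthorService" SelectMethod="GetAuthors" />, with the DataList bound to it. The usual argument is not raw performance but that the data access becomes testable and reusable outside the page.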

  • How do I run D3D9 programs (that have already been compiled) on a machine without the SDK?

    - by rambo
    I have a simple 3D application programmed in C++ and D3D9 using MSVC++ 2008 Express. Some weeks ago I had to format my hard disk, so the DirectX SDK is not currently installed. However, the exe file in the project's "Debug" folder no longer runs. The error it gives is: "This application has failed to start because d3dx9d_38.dll was not found. Re-installing the application may fix this problem." Of course, it worked after I installed the SDK. Then I compiled a "release build", thinking that that was the solution, uninstalled the SDK, and tried to run the .exe file. It still gave me the error. So how does one make such .exe files run on machines without the SDK?

  • Background subtraction in MATLAB

    - by eiphyomin
    I'm looking to do background subtraction on an image. I'm new to MATLAB and new to image processing/analysis, so sorry if any of this sounds stupid.

    1) Other than imsubtract(), are there other ways to do background subtraction (besides comparing one image to another)?
    2) In the MathWorks explanation for imsubtract(), why do they make their structuring element a disk?

    This seems rather difficult so far because every time I try something, I end up not only subtracting the noisy background but also losing the parts of the image I want to look at!

  • ASP SaveToDisk method takes an incredible amount of time

    - by burnt_hand
    This is a method in ASP Classic that saves a file to disk. It takes a very long time and I'm not sure why. Normally I wouldn't mind so much, but the files it handles are pretty large, so this needs to be faster than a 100 kB per second save. Seriously slow. (Old legacy system, band-aid fix till it gets replaced...)

        Public Sub SaveToDisk(sPath)
            Dim oFS, oFile
            Dim nIndex

            If sPath = "" Or FileName = "" Then Exit Sub
            If Mid(sPath, Len(sPath)) <> "\" Then sPath = sPath & "\"

            Set oFS = Server.CreateObject("Scripting.FileSystemObject")
            If Not oFS.FolderExists(sPath) Then Exit Sub

            Set oFile = oFS.CreateTextFile(sPath & FileName, True)

            For nIndex = 1 To LenB(FileData)
                oFile.Write Chr(AscB(MidB(FileData, nIndex, 1)))
            Next

            oFile.Close
        End Sub

    I'm asking because there are plenty of WTFs in this code, so I'm fighting those fires while getting some help on this one.

  • Serialize an Object to a String

    - by Vaccano
    I have the following method to save an object to a file:

        // Save an object out to the disk
        public static void SerializeObject<T>(this T toSerialize, String filename)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            TextWriter textWriter = new StreamWriter(filename);
            xmlSerializer.Serialize(textWriter, toSerialize);
            textWriter.Close();
        }

    I confess I did not write it (I only converted it to an extension method that takes a type parameter). Now I need it to give the XML back to me as a string (rather than save it to a file). I am looking into it, but I have not figured it out yet. I thought this might be really easy for someone familiar with these objects. If not, I will figure it out eventually.
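
    A sketch of the string-returning variant, assuming the same XmlSerializer approach: write to a StringWriter instead of a StreamWriter and return its contents.

        using System.IO;
        using System.Xml.Serialization;

        public static class XmlExtensions
        {
            // Serialize an object to an XML string instead of a file.
            public static string SerializeToString<T>(this T toSerialize)
            {
                XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
                using (StringWriter textWriter = new StringWriter())
                {
                    xmlSerializer.Serialize(textWriter, toSerialize);
                    return textWriter.ToString();
                }
            }
        }

    One quirk to be aware of: because StringWriter reports UTF-16, the XML declaration will say encoding="utf-16".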

  • How do search engines see dynamic profiles?

    - by Lumpy
    Recently, search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name? As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but is shorthand for a query like "select username from users", with the result displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.

  • WPF: databind an in-memory image to an Image control

    - by Ready Cent
    I am using a DataGrid and trying to do the following data binding:

        <DataTemplate>
            <Grid>
                <Image>
                    <Image.Source>
                        <BitmapImage UriSource="{Binding Data.CustomImage}" CacheOption="OnLoad" />
                    </Image.Source>
                </Image>
            </Grid>
        </DataTemplate>

    CustomImage is of type BitmapImage. When I run it I get the error: "Initialization of 'System.Windows.Media.Imaging.BitmapImage' threw an exception." The thing is that these images are stored as resources in a different assembly, so I can't just point to a location on disk.
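
    If CustomImage really is already a BitmapImage, one option (a sketch under that assumption, not necessarily the poster's setup) is to bind Image.Source to it directly, e.g. <Image Source="{Binding Data.CustomImage}" />, and build the BitmapImage from the other assembly's resource stream in code, with OnLoad caching so it lives entirely in memory. The pack URI, class and property below are placeholders.

        using System;
        using System.IO;
        using System.Windows;
        using System.Windows.Media.Imaging;

        public class RowItem
        {
            // Hypothetical bound item: builds the BitmapImage once, entirely in memory.
            public BitmapImage CustomImage
            {
                get
                {
                    var bmp = new BitmapImage();
                    var packUri = new Uri("pack://application:,,,/OtherAssembly;component/Images/sample.png");
                    using (Stream stream = Application.GetResourceStream(packUri).Stream)
                    {
                        bmp.BeginInit();
                        bmp.CacheOption = BitmapCacheOption.OnLoad;   // decode now so the stream can close
                        bmp.StreamSource = stream;
                        bmp.EndInit();
                    }
                    bmp.Freeze();   // safe to hand to the UI thread
                    return bmp;
                }
            }
        }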

  • WPF Toolkit DataGrid shows fields even with the Browsable attribute set to false

    - by Jonathan
    Hi, I have an ObservableCollection that I bind to a DataGrid using the DataGrid's ItemsSource property. All the properties of the class inside the collection are displayed properly in the DataGrid. Now I want to hide some fields from the DataGrid using the Browsable attribute [Browsable(false)] on the class. It works well in WinForms, but it doesn't seem to work in WPF. Does anyone know why? I can hide the columns later, but I don't want to lose performance that way. Is there any other solution? Thanks.
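
    One workaround sketch, assuming the columns are auto-generated: handle the grid's AutoGeneratingColumn event and cancel any column whose property carries [Browsable(false)], so the column is never created rather than hidden afterwards. (In the WPF Toolkit the event args type lives in Microsoft.Windows.Controls; in .NET 4+ it is System.Windows.Controls.)

        using System.ComponentModel;
        using System.Windows.Controls;

        // Wired up in XAML, e.g.:
        // <DataGrid AutoGenerateColumns="True" AutoGeneratingColumn="DataGrid_AutoGeneratingColumn" ... />
        private void DataGrid_AutoGeneratingColumn(object sender, DataGridAutoGeneratingColumnEventArgs e)
        {
            PropertyDescriptor descriptor = e.PropertyDescriptor as PropertyDescriptor;
            if (descriptor != null)
            {
                BrowsableAttribute browsable =
                    (BrowsableAttribute)descriptor.Attributes[typeof(BrowsableAttribute)];
                if (browsable != null && !browsable.Browsable)
                    e.Cancel = true;   // skip generating this column entirely
            }
        }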

  • Django: common template subsections

    - by Parand
    What's a good way to handle commonly occurring subsections of templates? For example, there is a sub-header section that's used across 4 different pages. The pages are different enough not to work well with template inheritance (i.e. "extends" doesn't fit well). Is "include" the recommended method here? It feels a bit heavyweight, requiring each subsection or snippet to be in its own file. Are there any performance issues in using include, or is it smart about forming the template from the subsections (i.e. if I make extensive use of it, do I pay any penalties)? I think what I'm looking for is something like template tags, but without the programming - a simple way to create a library of HTML template tags I can sprinkle into other templates.

  • SharePoint as a replacement for N-Tier applications and OLTP databases

    - by user264892
    All, at my current company we are looking to replace all ASP.NET applications and OLTP databases with SharePoint 2007. Our applications and databases deal with 10,000+ rows, and we have 5,000+ clients actively using the system. Our implementation of SharePoint would replace all n-tier applications. Does anyone have any experience implementing this? My current viewpoint is that SharePoint is not built for, or adequate enough to handle, this type of application. Can it really replace applications with hundreds of pages and hundreds of tables? Support data warehousing operations? Support high-performance OLTP operations? Provide a robust development environment? Any and all input is greatly appreciated. Thanks, S.O. Community.

  • A non-blocking server with java.io

    - by Jon
    Everybody knows that Java IO is blocking and Java NIO is non-blocking. In IO you will have to use the thread-per-client pattern; in NIO you can use one thread for all clients. Now my question follows: is it possible to make a non-blocking design using only the Java IO API (not NIO)? I was thinking about a pattern like this (obviously very simplified):

        List<Socket> li;
        for (Socket s : li) {
            InputStream in = s.getInputStream();
            byte[] data = new byte[in.available()];
            in.read(data);
            // processData(data); (decoding packets, encoding outgoing packets)
        }

    Also note that the client will always be ready for reading data. What are your opinions on this? Will this be suitable for a server that should hold at least a few hundred clients without major performance issues?

  • What is the most "database independent" way of creating a variable-length text field in a database?

    - by Thibaut Colar
    I want to create a text field in the database with no specific size (it will store text of unknown length in some cases) - the particular texts are serialized simple objects (roughly JSON). What is the most database-independent way to do this:

    - a varchar with no size specified (I don't think all DBs support this)
    - a 'text' field - this seems to be common, but I don't believe it's a standard
    - a blob or other object of that kind?
    - a varchar of a very large size (that's inefficient and probably wastes disk space)
    - other?

    I'm using JDBC, but I'd like to use something that is supported in most DBs (Oracle, MySQL, PostgreSQL, Derby, HSQL, H2, etc...). Thanks.

  • Stopwatch vs. using System.DateTime.Now for timing events

    - by Randy Minder
    I wanted to track the performance of a piece of my application so I initially stored the start time using System.DateTime.Now and the end time also using System.DateTime.Now. The difference between the two was how long my code took to execute. I noticed though that the difference didn't appear to be accurate. So I tried using a Stopwatch object. This turned out to be much, much more accurate. Can anyone tell me why Stopwatch would be more accurate than calculating the difference between a start and end time using System.DateTime.Now? Thanks.
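
    The short explanation: DateTime.Now is read from the system clock, which typically only ticks every 10-15 ms, while Stopwatch uses the high-resolution performance counter when the hardware provides one. A minimal usage sketch (the Sleep is just a stand-in for the code being measured):

        using System;
        using System.Diagnostics;

        class TimingDemo
        {
            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();

                System.Threading.Thread.Sleep(250);   // placeholder for the code under test

                sw.Stop();
                Console.WriteLine("Elapsed: {0} ms (high resolution counter: {1})",
                                  sw.ElapsedMilliseconds, Stopwatch.IsHighResolution);
            }
        }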

  • List(Of Byte) to PictureBox

    - by michael
    I have a JPEG file that is being held as a List(Of Byte). Currently I have code that I can use to load and save the JPEG file as either a binary (.jpeg) or a CSV of bytes (asadsda.csv). I would like to be able to take the List(Of Byte) and convert it directly to a PictureBox without saving it to disk and then loading it into the PictureBox. If you are curious, the reason I get the picture file as a list of bytes is that it gets transferred over serial via an industrial byte-oriented protocol as just a bunch of bytes. I am using VB.NET, but a C# example is fine too.
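
    A C# sketch of the in-memory route, assuming the list holds a complete, valid JPEG; the class, pictureBox and jpegBytes names are placeholders:

        using System.Collections.Generic;
        using System.Drawing;
        using System.IO;
        using System.Windows.Forms;

        static class JpegBytesHelper
        {
            public static void ShowJpeg(PictureBox pictureBox, List<byte> jpegBytes)
            {
                using (MemoryStream ms = new MemoryStream(jpegBytes.ToArray()))
                using (Image decoded = Image.FromStream(ms))
                {
                    // Copy into a fresh Bitmap so the MemoryStream can be disposed;
                    // GDI+ otherwise wants the stream kept open for the image's lifetime.
                    pictureBox.Image = new Bitmap(decoded);
                }
            }
        }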

  • C++ vector and struct problem win32

    - by ~james2432
    I have a structure defined in my header file:

        struct video {
            wchar_t* videoName;
            std::vector<wchar_t*> audio;
            std::vector<wchar_t*> subs;
        };

        struct ret {
            std::vector<video*> videos;
            wchar_t* errMessage;
        };

        struct params {
            HWND form;
            wchar_t* cwd;
            wchar_t* disk;
            ret* returnData;
        };

    When I try to add my video structure to a vector of video* I get an access violation reading 0xcdcdcdc1 (videoName is @ 0xcdcdcdcd, before I allocate it).

        // extract of code where problem is
        video v;
        v.videoName = (wchar_t*)malloc((wcslen(line)+1)*sizeof(wchar_t));
        wcscpy(v.videoName, line);
        p->returnData->videos.push_back(&v); // error here

  • Password protect web pages on Windows CE 6

    - by Chris
    I am using the default web server for WinCE 6 and wish to password-protect certain folders. The default VROOT /remoteadmin/ is password protected, and this works, but my own configuration doesn't. I have tried mimicking these settings on my own folders, with little success. Here is how one looks. In the HKLM\Comm\HTTPD\VROOTS key I have created a subkey called /web/configuration (this folder actually exists on the box). The following values are in this key:

        A = 1
        DefaultPage = config.html
        Path = /hard disk/webroot/web/configuration/
        UserList = ADMIN

    This is nigh on identical to the settings in /RemoteAdmin/, but /RemoteAdmin/ requests a password and /web/configuration doesn't (even after a reboot).

  • Compare images to find differences

    - by _simon_
    Task: I have a camera mounted at the end of our assembly line which captures images of produced items. Let's say, for example, that we produce tickets (with some text and pictures on them). Every produced ticket is photographed and saved to disk as an image. Now I would like to check these saved images for anomalies, i.e. compare them to a template image that is known to be OK. So if there is a problem with a ticket on our assembly line (missing picture, a stain, ...), my application should find it (because its image differs too much from my template).

    Question: What is the easiest way to compare pictures and find differences between them? Do I need to write my own methods, or can I use existing ones? It would be great if I could just set a tolerance value (i.e. images may differ by 1%), put both images into a function and get back true or false. :)

    Tools: C# or VB.NET, Emgu.CV (.NET wrapper for OpenCV) or something similar.
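
    A deliberately naive sketch using only System.Drawing (no OpenCV): count pixels whose colour differs from the template by more than a per-channel threshold and compare that count against the tolerance. It assumes both images have identical dimensions and perfect alignment, and the names and threshold values are made up.

        using System;
        using System.Drawing;

        static class ImageDiff
        {
            public static bool DiffersMoreThan(Bitmap template, Bitmap candidate,
                                               double tolerancePercent, int channelThreshold = 32)
            {
                if (template.Width != candidate.Width || template.Height != candidate.Height)
                    return true;   // different sizes count as "different"

                long differing = 0;
                for (int y = 0; y < template.Height; y++)
                {
                    for (int x = 0; x < template.Width; x++)
                    {
                        Color a = template.GetPixel(x, y);
                        Color b = candidate.GetPixel(x, y);
                        int delta = Math.Abs(a.R - b.R) + Math.Abs(a.G - b.G) + Math.Abs(a.B - b.B);
                        if (delta > channelThreshold)
                            differing++;
                    }
                }

                double percent = 100.0 * differing / ((long)template.Width * template.Height);
                return percent > tolerancePercent;
            }
        }

    GetPixel is slow, so for assembly-line volumes you would want LockBits or an OpenCV-based diff (e.g. via Emgu.CV) instead, but the tolerance logic stays the same.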

  • Caching in Ruby Gem, possibly not using Rails

    - by corprew
    I am rewriting an existing Ruby gem to include caching. The gem is relatively commonly used and accesses a large amount of static data on a web service. Currently I have a small number of gem users doing a large number of accesses to the service - enough that under normal conditions they would be swamping or downing the service - and we're going to put the gem up on GitHub for general consumption. Right now, users can choose between using the Rails cache mechanism, a simple disk cache, or no cache. What is best practice for letting people choose which cache to use like this (being able to use this outside of Rails is a priority, so I can't just bail to the underlying caching mechanism)? I'm looking for suggestions/examples for configuration and interface, especially. Thanks for your suggestions.

  • Detecting and reloading updated application parameters at runtime

    - by VeeKayBee
    I am working on an ASP.NET web application (using .NET 4.5 and C#). The application deals with a lot of units of measure (kg, litre, km, etc.). Based on the selected unit we have to enforce an allowed range, and these values should be configurable without much effort. We identified two solutions for this:

    1. Keeping a configuration XML file. If we keep the values in XML, does changing the file require an iisreset or anything else that could take the site down for some time?
    2. Keeping the values in the DB and using SQL dependency caching, so an update to the DB refreshes the cached values. How complex is this, and does it affect performance?

    It would be a great help if there is some other method to achieve this. Thanks in advance.
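
    On the XML option: editing web.config restarts the application, but a separate data file does not, and ASP.NET's cache can watch the file for you. A sketch under those assumptions (the file name, location and helper class are placeholders):

        using System.Web;
        using System.Web.Caching;
        using System.Xml.Linq;

        public static class UnitLimitConfig
        {
            private const string CacheKey = "UnitLimits";

            // Returns the parsed limits document, reloading it automatically
            // whenever the underlying XML file changes on disk.
            public static XDocument Get(HttpContext context)
            {
                XDocument doc = context.Cache[CacheKey] as XDocument;
                if (doc == null)
                {
                    string path = context.Server.MapPath("~/App_Data/UnitLimits.xml");
                    doc = XDocument.Load(path);
                    context.Cache.Insert(CacheKey, doc, new CacheDependency(path));
                }
                return doc;
            }
        }

    The DB-plus-SqlDependency route works the same way conceptually, with a SqlCacheDependency in place of the file dependency.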

  • Best way to have a unique key over 500M varchar(255) records in MySQL/InnoDB?

    - by taw
    I have a url column with a unique key over it - but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking: how about adding a column of md5(url) holding 16 bytes of binary data and unique-keying that instead? What would be the best datatype for that? I'd love to be able to just see a 32-character hex hash, while MySQL would convert it to/from 16 binary bytes and index that, since programs using the database might have trouble with arbitrary binary data that I'd rather avoid if possible. (I'm also a bit afraid that MySQL might get some strange ideas about character sets and, for example, over-allocate storage by 3:1 because it thinks it might need utf8 - how do I avoid that for sure?)

  • iPhone "multi-threading" question

    - by MrDatabase
    I have a simple iPhone game consisting of two "threads": the main game loop, where all updating and rendering happen 30 times per second (NSTimer), and the "thread" that calls the accelerometer delegate 100 times per second. I have a variable "xPosition" that's updated in the accelerometer delegate function and used in the game loop. Is there a possibility of the two "threads" trying to use xPosition at the same time (hence causing a crash or some other problem)? If so, how can I fix this with minimal impact to the game's performance? I've been using this setup for many months of development and incremental testing and I've never run into any problems. Cheers!

  • Why are SQL built-in functions faster than UDFs?

    - by Zerotoinfinite
    Though it's quite a subjective question, I feel it's necessary to share it on this forum. I have personally experienced that when I create a UDF (even one that is not complex) and use it in my SQL, it drastically decreases performance. But when I use SQL's built-in functions, they work much faster. Conversion, logical and string functions are clear examples of that. So my question is: why are SQL built-in functions faster than UDFs? It would also be a bonus if someone could show me how to judge or estimate a function's cost, either mathematically or logically.

  • When configuring daily backups, which files should I include to be sure I have the MySQL DBs?

    - by user575599
    I have a dedicated LAMP server with cPanel hosting 100 websites (some of them have MySQL DBs). I am currently using Jungle Disk Server Edition to back up our files from the LAMP server to Amazon S3. Once a week we are backing up the entire cPanel, which is an enormous strain on resources, but that is a separate issue. What I want to do now is set up a daily job to back up just the HTML files and the MySQL DBs. If I just back up the "public_html" folder, will my MySQL database info be stored in that directory? Would backing up the public_html folder be enough to recover the DBs? I can find plenty of resources online about how to manually back up MySQL DBs, but with 100 sites, I need it automated. I'm hoping for an easy solution where I can just grab a folder to back up each day.
