Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • MySQL Database Question about Large Columns

    - by murat
    Hi, I have a table with 100,000 rows, and soon it will double. The database is currently 5 GB, and most of that goes to one particular column: a text column holding the text versions of PDF files. We expect a 20-30 GB (maybe 50 GB) database within a couple of months, and the system will be used heavily. I have a couple of questions about this setup.

    1) We use InnoDB on every table, including the users table. Is it better to use MyISAM on the table where we store the text versions of the PDF files (from a memory usage/performance perspective)?

    2) We use Sphinx for searching, but the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again, and those 10 rows may take 50 MB of memory, which is quite large. So I am planning to split the PDF files into chunks of 5 pages each in the database: the 100,000 rows become around 3-4 million rows, and in a couple of months, instead of 300,000-350,000 rows, we'd have 10 million rows storing the text versions of these PDF files. However, we would retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we could retrieve 5, which would have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files with fewer than 5 pages, the execution time drops to 0.06 seconds and uses less memory.

    Do you think this is a good trade-off? We would have millions of rows instead of 100k-200k, but it would save memory and improve performance. Is this a good approach to the problem, and do you have other ideas for overcoming it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,
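
    A minimal SQL sketch of the chunked layout described above (all table and column names are assumptions, not from the question):

        -- one row per 5-page chunk instead of one row per whole PDF (hypothetical names)
        CREATE TABLE pdf_text_chunk (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
            document_id INT UNSIGNED NOT NULL,   -- points back at the original PDF row
            chunk_no    INT UNSIGNED NOT NULL,   -- 0-based index of the 5-page chunk
            body        MEDIUMTEXT   NOT NULL,   -- text of those 5 pages
            PRIMARY KEY (id),
            UNIQUE KEY by_doc (document_id, chunk_no)
        ) ENGINE=InnoDB;

    Sphinx would then index body per chunk, and highlighting would only ever load the few chunks that matched.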

    Read the article

  • using EventToCommand & PassEventArgsToCommand :: how to get sender, or better metaphor?

    - by JoeBrockhaus
    The point of what I'm doing is that there is a lot that needs to happen in the viewmodel when the view has been loaded, not in the constructor. I could wire up event handlers and send messages, but that just seems kind of sloppy to me. I'm implementing a base view and base viewmodel where this logic is contained, so all my views get it by default, hopefully. Perhaps I can't even get what I'm after: the sender. I just figured this is what RoutedEventArgs.OriginalSource was supposed to be?

    [Edit] In the meantime, I've hooked up an EventHandler in the xaml.cs, and sure enough, OriginalSource is null there as well. So I guess what I really need to know is whether it's possible to get a reference to the view/sender in the Command as well? [/Edit]

    My implementation requires that a helper class to my viewmodels, which is responsible for creating 'windows', knows of the 'host' control that all the windows get added to. I'm open to suggestions for accomplishing this outside the scope of using EventToCommand. :) (The code for Unloaded is the same.)

        #region ViewLoadedCommand
        private RelayCommand<RoutedEventArgs> _viewLoadedCommand = null;

        /// <summary>
        /// Command to handle the control's Loaded event.
        /// </summary>
        public RelayCommand<RoutedEventArgs> ViewLoadedCommand
        {
            get
            {
                // lazy-instantiate the RelayCommand on first usage
                if (_viewLoadedCommand == null)
                {
                    _viewLoadedCommand = new RelayCommand<RoutedEventArgs>(
                        e => this.OnViewLoadedCommand(e));
                }
                return _viewLoadedCommand;
            }
        }
        #endregion ViewLoadedCommand

        #region View EventHandlers
        protected virtual void OnViewLoadedCommand(RoutedEventArgs e)
        {
            EventHandler handler = ViewLoaded;
            if (handler != null)
            {
                handler(this, e);
            }
        }
        #endregion
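
    One way to hand the view to the command without relying on OriginalSource - a sketch, assuming MVVM Light's EventToCommand and an x:Name on the view's root element, not the asker's confirmed solution - is to bind the element itself to CommandParameter:

        <!-- on the view, whose root element carries x:Name="root";
             i = System.Windows.Interactivity, cmd = GalaSoft.MvvmLight.Command -->
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="Loaded">
                <cmd:EventToCommand Command="{Binding ViewLoadedCommand}"
                                    CommandParameter="{Binding ElementName=root}" />
            </i:EventTrigger>
        </i:Interaction.Triggers>

    The command would then be a RelayCommand<FrameworkElement> (or take a small wrapper type carrying both sender and args) instead of RelayCommand<RoutedEventArgs>.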

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas.

    I have one EC2 DB server (MySQL + NFS file sharing + Memcached) and three EC2 web servers (lighttpd) that mount the NFS folders from the DB server. Everything went smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then things return to normal. Static files like .html are unaffected. All servers have the same problem at exactly the same time.

    I have spent one whole day analyzing the cause. Finally, I found that when the problem appears, the number of file descriptors held by lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds. It is around 250 in normal times; when the problem appears, it rises to about 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on?

    (A CPU graph of one of the web servers was attached here.)
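
    A small shell loop - a sketch, with 1234 standing in for the real lighttpd PID - can log the fd count over time so the spikes can be lined up against the outages and against lighttpd's error log:

        # sample lighttpd's open file descriptors every 5 seconds
        while true; do
            printf '%s %s\n' "$(date '+%H:%M:%S')" "$(ls /proc/1234/fd | wc -l)"
            sleep 5
        done >> /tmp/lighttpd-fd.log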

    Read the article

  • Sending an email with browser capabilities and screen size etc.

    - by talkingnews
    A lot of my visitors are blind (it being a site for the blind), and when diagnosing problems I'd often like to know what browser version they're using, whether Flash is installed, and so on - because more often than not, someone will swear they are using X when in fact Y is installed. Currently, I'm using http://jsbrwsniff.sourceforge.net/usage.html piped into an email, but I've got two problems here.

    First, jsbrwsniff is quite "heavy" and hasn't been updated since early 2007, so there are a lot of -1's in the result. Second, if I call it as follows, the page reloads:

        <a href="#" onclick="sendEmail()">Email feedback</a>

    And if I call it like this, the page goes blank and looks like it's infinitely trying to load a blank page:

        <a href="javascript:sendEmail()">Email feedback</a>

    See the nightmare for yourself here: http://kingston.talking-newspapers.co.uk/

    Now, I know there are 1001 articles and comments here and elsewhere saying "don't use browser sniffers, they can be spoofed (etc)", but honestly, you'll have to trust me that this is a significantly useful tool when you're talking someone in their more "senior years" through "help about" with a screen reader, when they've clicked the wrong window to start with! I'm already using jQuery on the site, and I'm aware of $.browser and $.support, but these don't tell me the things I need (like whether Flash is installed, and which version). I've looked everywhere for a jQuery plugin for my needs, with no luck.

    Finally, if I have to stick with the current jsbrwsniff method it's not the end of the world, but if anyone knows a way of launching the user's email client populated with the information I need WITHOUT refreshing or blanking the page, I'd love to know. BTW - there's a good reason for not using a web form: it's simply easier for the screen-reader user to use an email client they are used to. Thanks!
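
    The reload happens because the inline handler never cancels the link's default navigation. A minimal sketch (the mailto address is a placeholder, and the body only uses standard browser properties rather than jsbrwsniff's output):

        function sendEmail() {
            var info = 'UA: ' + navigator.userAgent +
                       '\nScreen: ' + screen.width + 'x' + screen.height;
            location.href = 'mailto:feedback@example.org?subject=Feedback&body=' +
                            encodeURIComponent(info);
            return false;   // cancel the default navigation
        }

    and in the markup, return the handler's value so the browser honours the cancellation:

        <a href="#" onclick="return sendEmail()">Email feedback</a>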

    Read the article

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is: which database to pick?

    The database will have a Twisted app that mostly makes inserts and updates with relatively few selects, plus other strictly read-only clients that access the database directly, making selects. (The read-only users are not necessarily selecting the data the Twisted app is inserting; it's not as though the database is being used as a message queue.)

    My understanding - which I'd like corrected/advised - is that:

    - Postgres is a great DB, but the Python bindings - and there is a confusing maze of them - are mostly abandonware.
    - There is psycopg2, but it makes a lot of noise about doing its own connection pooling and such; does this coexist gracefully/usefully/transparently with Twisted's async database connection pooling?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL - after the Oracle takeover, who'd want to adopt it now, or adopt a fork?

    Is there anything else out there?
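
    For reference, Twisted's async layer wraps any blocking DB-API 2.0 module in a thread pool, so the driver choice is somewhat orthogonal to the async question. A minimal sketch with psycopg2 (connection details are placeholders):

        from twisted.enterprise import adbapi

        # each query runs on a pooled thread/connection and returns a Deferred
        dbpool = adbapi.ConnectionPool("psycopg2",
                                       host="localhost", database="mydb",
                                       user="app", password="secret")

        def show(rows):
            print("row count:", rows[0][0])

        d = dbpool.runQuery("SELECT count(*) FROM items")
        d.addCallback(show)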

    Read the article

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository, and an SVN post-commit hook so that whenever a commit is made the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to update only those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this, please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a high-spec dedicated server). The development site is approx. 7.5 GB in size and contains approx. 600,000 items, mainly due to having multiple branches/tags.
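
    A hedged sketch of a post-commit hook that updates only the changed paths (it assumes the working copy at /home/www/dev_ssl mirrors the repository from its root; updating the true lowest common directory would be a small refinement of the same idea):

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        WC=/home/www/dev_ssl

        # list the paths touched by this revision and update each one in place
        /usr/bin/svnlook changed -r "$REV" "$REPOS" | awk '{print $2}' | \
        while read -r path; do
            /usr/bin/svn update --quiet "$WC/$path"
        done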

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once when an object of type A is created and will never need to be resized.
    - The size of v varies across instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream them through the CPU, performing only very simple computations on the member variables.

    1. Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    2. What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
    3. If such fragmentation is a performance issue, are there techniques that would allow A and v to be allocated in a contiguous region of memory? Or are there techniques to aid memory access, such as a prefetching scheme - for example, get an object of type A and operate on the other member variables while prefetching v?
    4. If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, compiled using either GCC or MSVC.
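
    On question 3, one standard technique - a sketch, not from the question - is a single allocation that places the array immediately after the object, so A and v are always contiguous:

        #include <cstdlib>
        #include <new>

        class A
        {
        public:
            int a;
            int b;
            int* v;

            // allocate the object and its array in one contiguous block
            static A* create(std::size_t n)
            {
                void* mem = std::malloc(sizeof(A) + n * sizeof(int));
                A* obj = new (mem) A();
                obj->v = reinterpret_cast<int*>(obj + 1);  // array starts right after the object
                return obj;
            }

            static void destroy(A* obj)
            {
                obj->~A();
                std::free(obj);
            }
        };

    Streaming such objects then touches one cache-friendly block per instance instead of two unrelated heap locations.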

    Read the article

  • Should a setter return immediately if assigned the same value?

    - by Andrei Rinea
    In classes that implement INotifyPropertyChanged I often see this pattern:

        public string FirstName
        {
            get { return _customer.FirstName; }
            set
            {
                if (value == _customer.FirstName) return;
                _customer.FirstName = value;
                base.OnPropertyChanged("FirstName");
            }
        }

    Precisely these lines bother me:

        if (value == _customer.FirstName) return;

    I've often done this, but I am not sure it's needed or good. After all, if a caller assigns the very same value, I don't want to reassign the field and, especially, notify my subscribers that the property has changed when, semantically, it didn't. Beyond saving some CPU/RAM/etc. by sparing the UI an update of something that would look the same on screen, what do we gain? Could some people force a refresh by reassigning the same value to a property (not that this would be good practice, however)?

    1. Should we do it or shouldn't we?
    2. Why?
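
    For what it's worth, this comparison is often centralized in a base-class helper so each setter stays a one-liner - a common sketch, not from the question (the asker's setter wraps a _customer object rather than a field, so it would need a small adaptation):

        protected bool SetProperty<T>(ref T field, T value, string propertyName)
        {
            // skip both the assignment and the notification when nothing changed
            if (EqualityComparer<T>.Default.Equals(field, value))
                return false;
            field = value;
            OnPropertyChanged(propertyName);
            return true;
        }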

    Read the article

  • question about MySQL database migration

    - by WilliamLou
    Hi there. I have a MySQL database with several tables on a live server, and I would like to migrate it to another server. The migration involves some changes to the database tables: adding new columns to several tables, adding some new tables, etc.

    The only method I can think of is to use a PHP or Python script (the two languages I know) to connect to both databases, dump the data from the old database, and write it into the new one. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, where the extra column should have the default value 0 for all the old rows. My script would still need to dump the data row by row and insert each row into the new database.

    Is there any tool, or a better method, than writing such a script myself? I don't need to worry about concurrent-write problems etc.; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
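
    Since the database can be taken offline for the upgrade, the stock tools already cover this. A hedged sketch (host, user, and database names are placeholders; "new_col" is a hypothetical column):

        # copy the old database wholesale to the new server
        mysqldump -h old-host -u user -p olddb | mysql -h new-host -u user -p newdb

        # then apply the schema changes; existing rows pick up the DEFAULT automatically
        mysql -h new-host -u user -p newdb \
              -e "ALTER TABLE A ADD COLUMN new_col INT NOT NULL DEFAULT 0;"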

    Read the article

  • Native functions throw UnsatisfiedLinkError in custom view, despite working in main activity

    - by Mark Ingram
    For some reason I can only call native functions from my main activity, not from any custom views that I've created. Here is an example file (I followed a tutorial but renamed the classes: http://mindtherobot.com/blog/452/android-beginners-ndk-setup-step-by-step/). See the usage of the native function getNewString:

        package com.example.native;

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.os.Bundle;
        import android.view.View;

        public class NativeTestActivity extends Activity
        {
            static { System.loadLibrary("nativeTest"); }

            private native String getNewString();

            @Override
            public void onCreate(Bundle savedInstanceState)
            {
                super.onCreate(savedInstanceState);
                this.setContentView(new BitmapView(this));

                String hello = getNewString(); // This line works fine
                new AlertDialog.Builder(this).setMessage(hello).show();
            }
        }

        class BitmapView extends View
        {
            static { System.loadLibrary("nativeTest"); }

            private native String getNewString();

            public BitmapView(Context context)
            {
                super(context);

                String hello = getNewString(); // This line throws the UnsatisfiedLinkError
                new AlertDialog.Builder(this.getContext()).setMessage(hello).show();
            }
        }

    How can I call native functions in my custom views? I've built the application as an Android 2.2 app. I'm running it on my HTC Desire, with the latest SDK (9) and the latest NDK (r5).
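
    A likely cause - an assumption, not confirmed by the question: with javah-style registration, the declaring class is baked into the exported symbol name, so each class that declares getNewString needs its own C entry point. A sketch of the pair of exports (taking the package name as written):

        #include <jni.h>

        /* one export per declaring class: the class name is part of the symbol */
        JNIEXPORT jstring JNICALL
        Java_com_example_native_NativeTestActivity_getNewString(JNIEnv* env, jobject thiz)
        {
            return (*env)->NewStringUTF(env, "hello from native");
        }

        JNIEXPORT jstring JNICALL
        Java_com_example_native_BitmapView_getNewString(JNIEnv* env, jobject thiz)
        {
            return (*env)->NewStringUTF(env, "hello from native");
        }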

    Read the article

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (including one optional block) which is responsible for receiving a query object from my application over HTTP, retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response.

    In some testing I've been pulling down a significant amount of data from the database (570 thousand rows), stored in a List object and passed between the different blocks, and it seems that even after the final block has completed, the memory isn't released. RAM usage in Task Manager spikes to over 2 GB, and I can observe several large spikes as the List hits each block.

    The signatures for my blocks look like this:

        private TransformBlock<HttpListenerContext,
            Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults;
        private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse;

    They are linked as follows:

        m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults);
        m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA);
        m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB);
        m_ConvertResults.LinkTo(m_ReturnHttpResponse);

    Is it possible to set up the pipeline so that once each block is done with the list it no longer holds on to it, and so that once the entire pipeline is completed, the memory is released?
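
    One lever worth knowing about - a sketch, not a confirmed fix for this pipeline; it bounds how many messages a block buffers rather than forcing a garbage collection - is BoundedCapacity on the block options:

        // each block holds at most one in-flight message, so large lists stop
        // queuing up between blocks and become collectable sooner
        var options = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };

        m_ConvertResults = new TransformBlock<
            Tuple<HttpListenerContext, QueryObject, List<string>>,
            Tuple<HttpListenerContext, QueryObject, List<string>>>(
                tuple => Convert(tuple),   // Convert stands in for the real transform
                options);

    Memory still resident after completion may simply be the GC not having run yet; the spikes while the List moves between blocks are the part BoundedCapacity addresses.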

    Read the article

  • Custom model validation of dependent properties using Data Annotations

    - by Darin Dimitrov
    Until now I've used the excellent FluentValidation library to validate my model classes. In web applications I use it in conjunction with the jquery.validate plugin to perform client-side validation as well. One drawback is that much of the validation logic is repeated on the client side and is no longer centralized in a single place, so I'm looking for an alternative.

    There are many examples out there showing the use of data annotations to perform model validation, and it looks very promising. One thing I couldn't find out is how to validate a property that depends on another property's value. Take for example the following model:

        public class Event
        {
            [Required]
            public DateTime? StartDate { get; set; }

            [Required]
            public DateTime? EndDate { get; set; }
        }

    I would like to ensure that EndDate is greater than StartDate. I could write a custom validation attribute extending ValidationAttribute in order to perform custom validation logic, but I couldn't find a way to obtain the model instance:

        public class CustomValidationAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                // value is the property value on which this attribute is applied,
                // but how do I obtain the object instance this property belongs to?
                return true;
            }
        }

    I found that CustomValidationAttribute seems to do the job because it has a ValidationContext property containing the object instance being validated - but unfortunately that attribute was only added in .NET 4.0. So my question is: can I achieve the same functionality in .NET 3.5 SP1?

    UPDATE: It seems FluentValidation already supports client-side validation and metadata in ASP.NET MVC 2. Still, it would be good to know whether data annotations can be used to validate dependent properties.
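
    One workaround commonly suggested for .NET 3.5 - a sketch, not from the question - is to apply the custom attribute at the class level; there, IsValid receives the whole model instance instead of a single property value:

        [AttributeUsage(AttributeTargets.Class)]
        public class DateRangeAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                // at class level, 'value' is the Event instance itself
                var e = value as Event;
                if (e == null || !e.StartDate.HasValue || !e.EndDate.HasValue)
                    return true; // let [Required] report missing values
                return e.EndDate > e.StartDate;
            }
        }

        [DateRange(ErrorMessage = "End date must be after start date.")]
        public class Event { /* properties as above */ }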

    Read the article

  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a user process in a Linux kernel crash dump. The normal steps to get into the crash dump are:

    1. Go to the path where the dump is located.
    2. Run crash kernel_link dump.201104181135, where kernel_link is a soft link I created to the vmlinux image.
    3. You are now at the crash prompt, where you can run foreach <PID of the process> bt, e.g.:

        crash> foreach 6920 bt
        PID: 6920   TASK: ffff88013caaa800  CPU: 1   COMMAND: "climmon"
         #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a
         #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7
         #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a
         #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45
         #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c
         #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a
         #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592
         #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba
         #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e
         #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f
        #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69
        #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560
        #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5
            RIP: 00007fd02b7e9071  RSP: 0000000040e86ea0  RFLAGS: 00010202
            RAX: 0000000000000000  RBX: 0000000000000000  RCX: 00007fd02b7e9071
            RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000040e86ec0
            RBP: 0000000040e87140  R8:  0000000000000800  R9:  0000000000000000
            R10: 0000000000000000  R11: 0000000000000202  R12: 00007fff16ec43d0
            R13: 00007fd02bcadf00  R14: 0000000040e87950  R15: 0000000000001000
            ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

    The backtrace above shows the kernel functions used for scheduling and handling the page fault, but not the functions that were executing in the user process itself (here, e.g., climmon). So I am not able to debug this process, because I cannot see the functions it executed. Can anyone help me with this case?

    Read the article

  • What happened to the .NET version definition with v4.0?

    - by Tom Tresansky
    I'm building a C# class library using the beta 2 of Visual Web Developer/Visual C# 2010, and I'm trying to record which version of .NET the library was built under. In the past, I was able to use this:

        // What version of .NET was it built under?
        #if NET_1_0
            public const string NETFrameworkVersion = ".NET 1.0";
        #elif NET_1_1
            public const string NETFrameworkVersion = ".NET 1.1";
        #elif NET_2_0
            public const string NETFrameworkVersion = ".NET 2.0";
        #elif NET_3_5
            public const string NETFrameworkVersion = ".NET 3.5";
        #else
            public const string NETFrameworkVersion = ".NET version unknown";
        #endif

    So I figured I could just add:

        #elif NET_4_0
            public const string NETFrameworkVersion = ".NET 4.0";

    Now, in Project > Properties, my target framework is ".NET Framework 4". If I check

        Assembly.GetExecutingAssembly().ImageRuntimeVersion

    I can see my runtime version is v4.0.21006 (so I know I have .NET 4.0 installed on my machine). I naturally expect my NETFrameworkVersion variable to hold ".NET 4.0". It does not; it holds ".NET version unknown". So my question is: why is NET_4_0 not defined? Did the naming convention change? Is there some simple other way to determine the .NET framework build version, as there was in 3.5?
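
    Worth noting (a fact about the toolchain, though not stated in the question): Visual Studio's C# compiler does not predefine any NET_x_x symbols; they exist only if the project or build script defines them as conditional compilation symbols, which is presumably where the older values came from. A runtime alternative, sketched below, asks the CLR instead (it reports the loaded runtime, e.g. 4.0, not the target framework profile):

        // a sketch: derive the string from the running CLR version
        public static readonly string NETFrameworkVersion =
            ".NET " + Environment.Version.Major + "." + Environment.Version.Minor;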

    Read the article

  • What would you write in a constitution (law book) for a programmers' country?

    - by Developer Art
    Now that we have our great place to talk about professional matters and socialize online - SO - I believe the next logical step is to found our own country! I invite you all to participate and bring together your available resources. We will buy an island - better yet, a group of islands - so that we can establish states like .NET Territories, Java Land, Linux Republic, etc. We will build a society of programmers (girls - we need you too).

    On the organizational side, we're going to need some constitution or law book. I suggest we write it together; I'm making it a wiki, as it should be a cooperative effort. I'll open the work.

    Section 1. Programmers' rights.
    - Every citizen has a right to an Internet connection 24/7.
    - Every citizen can freely choose their field of interest.

    Section 2. Programmers' obligations.
    - Every citizen must embrace the changing nature of the profession and constantly educate himself.

    Section 3. Law enforcement.
    - Code duplication, when it can be avoided, is punished by limiting bandwidth to 64 Kbit for a period of one week.
    - Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month.
    - Use of technologies older than 5/10 years is punished by restricting web access to sites last updated 5/10 years ago, for a period of one month.

    Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the country's foundation. A purchase fund will be established shortly. Everyone is invited to participate.

    Read the article

  • Problem using Lazy<T> from within a generic abstract class

    - by James Black
    I have a generic class from which all my DAO classes derive, defined below. I also have a base class for all my entities, but that one is not generic. The method GetIdOrSave will be used with a different type than the T of SabaAbstractDAO, because I am trying to get the primary key to fulfill foreign-key relationships: the function either fetches the primary key, or saves the entity and then fetches its primary key. The last code snippet shows a version that works if I drop the generic part, so I think this could be solved with variance, but I can't figure out how to write an interface that will compile.

        public abstract class SabaAbstractDAO<T>
        {
            ...
            public K GetIdOrSave<K>(K item, Lazy<SabaAbstractDAO<BaseModel>> lazyitemdao)
                where K : BaseModel
            {
                if (item != null && item.Id.IsNull())
                {
                    var itemdao = lazyitemdao.Value;
                    item.Id = itemdao.retrieveID(item);
                    if (item.Id.IsNull())
                    {
                        return itemdao.SaveData(item);
                    }
                }
                return item;
            }
        }

    I am getting this error when I try to compile:

        Argument 2: cannot convert from 'System.Lazy<ORNL.HRD.LMS.Dao.SabaCourseDAO>'
        to 'System.Lazy<ORNL.HRD.LMS.Dao.SabaAbstractDAO<ORNL.HRD.LMS.Models.BaseModel>>'

    I am trying to call it this way:

        GetIdOrSave(input.OfferingTemplate,
            new Lazy<SabaCourseDAO>(() =>
            {
                return new SabaCourseDAO() { Dao = Dao };
            }));

    If I change the definition to this, it works:

        public K GetIdOrSave<K>(K item, Lazy<SabaCourseDAO> lazyitemdao)
            where K : BaseModel
        {

    So, how can I get this to compile using variance (if needed) and generics, so I can have a very general method that only works with BaseModel and AbstractDAO<BaseModel>? I expect I should only need to change the method and perhaps the abstract class definition; the usage should stay the same.
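
    One way to make the call site compile without variance - a sketch, with two assumptions not stated in the question: SabaCourseDAO derives from SabaAbstractDAO<Course>, and OfferingTemplate is a Course - is to tie the dao's model type to the method's own type parameter. Lazy<T> is a class and therefore invariant, so the lambda's return type does the converting instead:

        public K GetIdOrSave<K>(K item, Lazy<SabaAbstractDAO<K>> lazyitemdao)
            where K : BaseModel
        {
            // body unchanged from the question
        }

        // call site: SabaCourseDAO converts to SabaAbstractDAO<Course> by inheritance
        GetIdOrSave(input.OfferingTemplate,
            new Lazy<SabaAbstractDAO<Course>>(() => new SabaCourseDAO() { Dao = Dao }));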

    Read the article

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods, but C# 3.0 doesn't have them. So if I want to simulate "default parameters", I have to write the method twice: once with the extra arguments and once without. There are two ways I could do this.

    Version A - call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage - I don't even think picking one or the other has any significant effect - but I find it wiser to ask about these things than to assume, perchance faultily. Part of it is my general goal of remaining consistent.

    Read the article

  • Excel 2010 64-bit can't create .NET object

    - by aboes81
    I have a simple class library that I use in Excel. Here is a simplification of my class:

        using System;
        using System.Runtime.InteropServices;

        namespace SimpleLibrary
        {
            [ComVisible(true)]
            public interface ISixGenerator
            {
                int Six();
            }

            public class SixGenerator : ISixGenerator
            {
                public int Six()
                {
                    return 6;
                }
            }
        }

    In Excel 2007 I would create a macro-enabled workbook and add a module with the following code:

        Public Function GetSix()
            Dim lib As SimpleLibrary.SixGenerator
            Set lib = New SimpleLibrary.SixGenerator
            GetSix = lib.Six
        End Function

    Then in Excel I could call the function GetSix() and it would return six. This no longer works in Excel 2010 64-bit; I get Run-time error '429': ActiveX component can't create object. I tried changing the platform target to x64 instead of Any CPU, but then my code wouldn't compile unless I unchecked the "Register for COM interop" option - and doing so means my macro-enabled workbook can no longer see SimpleLibrary.dll, as it is no longer registered. Any ideas how I can use my library with Excel 2010 64-bit?
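
    A frequent cause of error 429 in this situation (an assumption here, but consistent with the symptoms): the library was registered by the 32-bit regasm, so 64-bit Excel never sees it. Registering with the 64-bit Framework's regasm looks like this - a sketch; the version folder depends on the installed runtime:

        %windir%\Microsoft.NET\Framework64\v4.0.30319\regasm.exe /codebase SimpleLibrary.dll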

    Read the article

  • Modify XML existing content in C#

    - by Nano HE
    Purpose: I plan to create an XML file with XmlTextWriter and then modify/update some existing content with XmlNode.SelectSingleNode(), node.ChildNodes[?].InnerText = "something", etc.

    I created the XML file with XmlTextWriter as below:

        XmlTextWriter textWriter = new XmlTextWriter(
            "D:\\learning\\cs\\myTest.xml", System.Text.Encoding.UTF8);

    Then I tried the code below, but failed to save my XML file:

        XmlDocument doc = new XmlDocument();
        doc.Load("D:\\learning\\cs\\myTest.xml");
        XmlNode root = doc.DocumentElement;
        XmlNode myNode;
        myNode = root.SelectSingleNode("descendant::books");
        ....
        textWriter.Close();
        doc.Save("D:\\learning\\cs\\myTest.xml");

    I suspect this is not a good way to go about it. Is there any suggestion? I am not clear on the concepts and usage of XmlTextWriter and XmlNode together in the same project. Thank you for reading and replies.
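
    A sketch of the usual division of labor (not the asker's exact code): finish and close the XmlTextWriter before the document is ever re-opened, then do all modification through XmlDocument alone:

        // phase 1: create the file, then release it
        using (XmlTextWriter writer = new XmlTextWriter(
                   @"D:\learning\cs\myTest.xml", System.Text.Encoding.UTF8))
        {
            writer.WriteStartElement("books");   // hypothetical minimal content
            writer.WriteString("old value");
            writer.WriteEndElement();
        }

        // phase 2: load, modify, save - XmlTextWriter plays no part here
        XmlDocument doc = new XmlDocument();
        doc.Load(@"D:\learning\cs\myTest.xml");
        XmlNode books = doc.SelectSingleNode("//books");
        books.InnerText = "new value";
        doc.Save(@"D:\learning\cs\myTest.xml");

    Keeping the writer open while XmlDocument targets the same path is what typically makes the load or save fail, since the file is still locked by the writer's stream.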

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    - I have a range of Windows and Linux machines.
    - Some machines are EBS-backed, others are S3-backed.
    - I need to be able to persist a machine (put it to sleep), preferably keeping all the settings that were active while it was running.
    - I need to be able to quickly wake a machine from sleep (ideally within an SLA of 2 minutes to turn on, if such an SLA is available from Amazon).

    Here's the part that confuses me:

    - AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
    - I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage, and then recovering from there - not a real sleep mode, but at least I don't continue to get billed for CPU.
    - S3-backed machines seem to take a long time either to write to disk or to recover (turn on).
    - I can't immediately tell which machines are EBS-backed and which are S3-backed. It seems I can instantiate either type, but it's not clear how Amazon decides whether a given machine is EBS- or S3-backed.

    Advice?
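
    On the last point, the root device type is queryable. A sketch with the AWS command-line tools (the instance ID is a placeholder):

        # prints "ebs" for EBS-backed instances, "instance-store" for S3-backed ones
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
            --query 'Reservations[].Instances[].RootDeviceType' --output text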

    Read the article

  • Why does Perl's DBI complain about "failed: ERROR OCIEnvNlsCreate" when I try to connect to Oracle 11g?

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple Perl script:

        failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows)
        and or NLS settings, permissions, etc. at ...

    The script is as follows:

        #!/usr/local/bin/perl
        use strict;
        use DBI;

        if ($#ARGV < 3) {
            print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
            exit 0;
        }

        my ($user, $pwd, $sid, $port) = @ARGV;
        my $host = `hostname`;
        my $dbh;
        my $sth;
        my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";

        openDbConnection();
        closeDbConnection();

        sub openDbConnection {
            $dbh = DBI->connect($dbname, $user, $pwd, { RaiseError => 1 })
                || die "Database connection not made: $DBI::errstr";
        }

        sub closeDbConnection {
            #$sth->finish();
            $dbh->disconnect();
        }

    Has anyone seen this problem before?
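
    The message itself points at the environment: DBD::Oracle cannot create an OCI environment, which almost always means the Oracle client libraries are not visible to the process. A sketch of the usual fix (the client path is an assumption for a Linux Instant Client install):

        # make the Oracle client visible before perl starts
        export ORACLE_HOME=/usr/lib/oracle/11.2/client64      # assumed install path
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
        perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort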

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that, in effect, wants to wait on multiple condition variables at once. I've read of several ways to get this functionality on Linux (apparently it's built in on Windows), but none of them seem suitable for my app. The methods I know of are:

    1. Have one thread wait on each of the condition variables you want to wait on; when woken, it signals a single condition variable which you wait on instead.
    2. Cycle through multiple condition variables with a timed wait.
    3. Write dummy bytes to files or pipes instead, and poll on those.

    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with - the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (and endlessly burning CPU and starving other threads is also bad).

    For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads - the mutexes and conditions would be stored in shared memory), the writing thread gets stuck because the pipe's buffer is full, as do any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran.

    Is there a better way to do this? I'm curious about answers appropriate for Solaris as well.
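
    For completeness, a mechanism that addresses the pipe-buffer objection on Linux (not mentioned in the question, and Linux-specific) is one eventfd per condition plus a single poll: an eventfd is a 64-bit counter, so non-blocking writers never get stuck no matter how far behind the waiter is. A sketch:

        #include <sys/eventfd.h>
        #include <poll.h>
        #include <stdint.h>
        #include <unistd.h>

        int main(void)
        {
            /* one eventfd per "condition variable" */
            int ev_a = eventfd(0, EFD_NONBLOCK);
            int ev_b = eventfd(0, EFD_NONBLOCK);

            /* signaler side: the counter just accumulates, no blocking */
            uint64_t one = 1;
            write(ev_a, &one, sizeof one);

            /* waiter side: block on both conditions at once */
            struct pollfd fds[] = { { ev_a, POLLIN, 0 }, { ev_b, POLLIN, 0 } };
            poll(fds, 2, -1);

            uint64_t count;
            if (fds[0].revents & POLLIN)
                read(ev_a, &count, sizeof count);  /* consume the wakeups */

            close(ev_a);
            close(ev_b);
            return 0;
        }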

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method that attaches string data rather than a file on disk. I came up with something like this:

        public function addAttachmentString($string, $name = '', $encoding = 'base64',
                                            $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with this?

    My alternative is to create a temporary file and clean up afterwards - but that means writing to disc, and this method will be used as part of a large batch process, so I want to avoid slow disc operations (old server) where possible. The file is only short, but it contains different information for each person the script emails.
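
    Two facts worth knowing here (about PHP's stream wrappers, not from the question): php://memory accepts no trailing path, which is what triggers the warning, and every fopen('php://memory') returns a fresh private stream, so a path-based AddAttachment could never re-open the data anyway. A sketch of the temp-file fallback kept as short-lived as possible (it assumes AddAttachment reads the file immediately rather than at send time):

        public function addAttachmentString($string, $name = '', $encoding = 'base64',
                                            $type = 'application/octet-stream')
        {
            // a real but tiny, short-lived file that the path-based API can re-open
            $path = tempnam(sys_get_temp_dir(), 'att');
            file_put_contents($path, $string);
            $this->AddAttachment($path, $name, $encoding, $type);
            unlink($path); // assumes the attachment was read above, not lazily at send
        }

    On a typical Linux server, /tmp writes of this size mostly hit the page cache, so the disc cost is usually smaller than feared.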

    Read the article

  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi. This is more of a design question than a specific code question; I'm sure I'm missing the obvious and just need another set of eyes.

    I am writing a multi-client server based on WSAAsyncSelect. Each connection is represented by an object of a connection class I have written, which holds its associated settings and buffers. My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established; thereafter, you send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again.

    This is where I have a problem: how large do I make this holding buffer within each connection object? The amount of time until a new FD_WRITE arrives is unknown, and I could be attempting to send a lot during that period, all the while adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and shrink the buffer.

    So my question is: how do you generally handle this situation? Note that I am not talking about the network buffer itself, which Winsock uses, but one of my own creation used to "queue" up sends. Hope I explained that well enough - thanks, all!
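
    There is no single right size; the usual design - a sketch with a hypothetical Connection class, not from the question - caps the per-connection queue and makes overflow an explicit policy decision (disconnect the slow peer, drop data, or push back on the producer):

        #include <string>

        class Connection
        {
            std::string m_pending;                          // bytes awaiting FD_WRITE
            static const size_t kMaxPending = 256 * 1024;   // assumption: 256 KB cap

        public:
            // returns false when the peer can't keep up; the caller decides the policy
            bool queueForSend(const char* data, size_t len)
            {
                if (m_pending.size() + len > kMaxPending)
                    return false;
                m_pending.append(data, len);
                return true;
            }
        };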

    Read the article

  • Special technology needed for browser based chat?

    - by orokusaki
    On this post, I read about the use of XMPP. Is this sort of thing necessary? And, more importantly, my main question expanded: can a chat server and client be built efficiently using only standard HTTP and browser technologies (such as PHP and JS, or RoR and JS, etc.)? Or is it best to stick with established protocols like XMPP and find a way to integrate them with my application?

    I looked at CampFire via LiveHTTPHeaders and Firebug for about 5 minutes, and it appears to use Ajax to send a request which is never answered until another chat message happens. Is this just CampFire opening a new thread on the server to listen for an update and then returning a response to the request when the thread hears one? I noticed that they're requesting on a specific port (8043, if memory serves), which makes me think they're doing something more complex than just what I described. Also, the requested URL started with /tcp/, which I found interesting.

    Note: I don't expect ever to have more than 150 users live-chatting across all rooms combined at the same time. I understand that if I were building a hosted, paid chat service like CampFire with thousands of concurrent users, it would behoove me to research special technologies rather than reinventing the wheel in a simple way in my app. Also, if you're going to do it with server polling, how often would you personally poll to maximize responsiveness without slamming the server?
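
    What the CampFire observation describes is long polling, which needs nothing beyond ordinary HTTP. A browser-side sketch (the URL and JSON shape are made up for illustration):

        var lastId = 0;

        function poll() {
            // the server holds this request open until a new message exists
            fetch('/messages?since=' + lastId)
                .then(function (r) { return r.json(); })
                .then(function (msgs) {
                    if (msgs.length) lastId = msgs[msgs.length - 1].id;
                    // render(msgs) would update the chat window here
                    poll();                                      // re-open immediately
                })
                .catch(function () { setTimeout(poll, 5000); }); // back off on errors
        }
        poll();

    With long polling the "how often" question mostly disappears: the client always has exactly one request outstanding, and latency is bounded by the round trip rather than a polling interval.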

    Read the article
