Search Results

Search found 10215 results on 409 pages for 'ram usage'.

Page 381/409 | < Previous Page | 377 378 379 380 381 382 383 384 385 386 387 388  | Next Page >

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, This isn't exactly a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC? My guess for graphics: There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor, e.g. 800 x 600 in 24-bit color mode == 800x600x3 = ~1.4MB of memory space. Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from that memory space byte for byte and activates hardware depending on those values for each color element of each pixel. All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions: a) How close is the above guess (the three steps)? b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day? c) Also, continuing from above, do libraries such as SDL etc. that use bitmaps actually just build the bitmap files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above? d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) + hardware interaction? e) Finally, how is all of this done for sound? (I have no idea :) )
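
    For question b), here is a minimal sketch of the old-school software approach: a frame is just a block of bytes, and a 2D sprite is just a smaller block of bytes copied into it. Everything below (the buffer sizes, the white test sprite) is made up for illustration; a real program would still hand the finished buffer to the OS or to a library such as SDL once per refresh rather than poking video memory directly.

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A software "framebuffer": WIDTH x HEIGHT pixels, 3 bytes (R, G, B) each.
    constexpr int WIDTH  = 800;
    constexpr int HEIGHT = 600;

    // Copy a small RGB sprite into the frame at (x, y), row by row.
    void blitSprite(std::vector<std::uint8_t>& frame,
                    const std::vector<std::uint8_t>& sprite,
                    int spriteW, int spriteH, int x, int y)
    {
        for (int row = 0; row < spriteH; ++row) {
            for (int col = 0; col < spriteW; ++col) {
                std::size_t dst = ((y + row) * WIDTH + (x + col)) * 3;
                std::size_t src = (row * spriteW + col) * 3;
                frame[dst + 0] = sprite[src + 0];   // red
                frame[dst + 1] = sprite[src + 1];   // green
                frame[dst + 2] = sprite[src + 2];   // blue
            }
        }
    }

    int main()
    {
        std::vector<std::uint8_t> frame(WIDTH * HEIGHT * 3, 0);   // all-black frame
        std::vector<std::uint8_t> sprite(16 * 16 * 3, 255);       // 16x16 white square
        blitSprite(frame, sprite, 16, 16, 100, 50);
        // At this point `frame` holds one complete 800x600 image in memory.
        return 0;
    }
    ```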

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
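
    For reference, a minimal sketch of how the objcopy-embedded block can then be consumed from C++, using the symbol names mentioned above (wrapping the bytes in a std::string is only for illustration; the real code presumably parses them differently). This translation unit obviously has to be linked together with the object file produced by objcopy.

    ```cpp
    #include <cstddef>
    #include <string>

    // Symbols emitted by objcopy for inbuilt_training_data.bin, using the
    // names that worked in practice (no leading underscore).
    extern char binary_inbuilt_training_data_bin_start;
    extern char binary_inbuilt_training_data_bin_end;

    std::string loadInbuiltTrainingData()
    {
        const char* begin = &binary_inbuilt_training_data_bin_start;
        const char* end   = &binary_inbuilt_training_data_bin_end;
        return std::string(begin, static_cast<std::size_t>(end - begin));
    }
    ```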

    Read the article

  • Running a network application on a port server-side gives me an error; how can I solve it?

    - by Phsika
    If I run the Server app, an exception occurs on Dinle.Start(): System.Net.SocketException - Only one usage of each socket address (protocol/network address/port) is normally permitted. How can I solve this error? Server.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Net; using System.Net.Sockets; using System.Threading; namespace Server { public partial class Server : Form { Thread kanal; public Server() { InitializeComponent(); try { kanal = new Thread(new ThreadStart(Dinle)); kanal.Start(); kanal.Priority = ThreadPriority.Normal; this.Text = "Kanla Çalisti"; } catch (Exception ex) { this.Text = "kanal çalismadi"; MessageBox.Show("hata:" + ex.ToString()); kanal.Abort(); throw; } } private void Server_Load(object sender, EventArgs e) { Dinle(); } private void btn_Listen_Click(object sender, EventArgs e) { Dinle(); } void Dinle() { // IPAddress localAddr = IPAddress.Parse("localhost"); // TcpListener server = new TcpListener(port); // server = new TcpListener(localAddr, port); //TcpListener Dinle = new TcpListener(localAddr,51124); TcpListener Dinle = new TcpListener(51124); try { while (true) { Dinle.Start(); /* the exception occurs here */ Socket Baglanti = Dinle.AcceptSocket(); if (!Baglanti.Connected) { MessageBox.Show("Baglanti Yok"); } else { TcpClient tcpClient = Dinle.AcceptTcpClient(); if (tcpClient.ReceiveBufferSize > 0) { byte[] Dizi = new byte[250000]; Baglanti.Receive(Dizi, Dizi.Length, 0); string Yol; saveFileDialog1.Title = "Dosyayi kaydet"; saveFileDialog1.ShowDialog(); Yol = saveFileDialog1.FileName; FileStream Dosya = new FileStream(Yol, FileMode.Create); Dosya.Write(Dizi, 0, Dizi.Length - 20); Dosya.Close(); listBox1.Items.Add("dosya indirildi"); listBox1.Items.Add("Dosya Boyutu=" + Dizi.Length.ToString()); listBox1.Items.Add("Indirilme Tarihi=" + DateTime.Now); listBox1.Items.Add("--------------------------------"); } } } } catch (Exception ex) { MessageBox.Show("hata:" + ex.ToString()); } } } }

    Read the article

  • Help with Arrays in Objective C.

    - by NJTechie
    Problem : Take an integer as input and print out number equivalents of each number from input. I hacked my thoughts to work in this case but I know it is not an efficient solution. For instance : 110 Should give the following o/p : one one zero Could someone throw light on effective usage of Arrays for this problem? #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int input, i=0, j,k, checkit; int temp[i]; NSLog(@"Enter an integer :"); scanf("%d", &input); checkit = input; while(input > 0) { temp[i] = input%10; input = input/10; i++; } if(checkit != 0) { for(j=i-1;j>=0;j--) { //NSLog(@" %d", temp[j]); k = temp[j]; //NSLog(@" %d", k); switch (k) { case 0: NSLog(@"zero"); break; case 1: NSLog(@"one"); break; case 2: NSLog(@"two"); break; case 3: NSLog(@"three"); break; case 4: NSLog(@"four"); break; case 5: NSLog(@"five"); break; case 6: NSLog(@"six"); break; case 7: NSLog(@"seven"); break; case 8: NSLog(@"eight"); break; case 9: NSLog(@"nine"); break; default: break; } } } else NSLog(@"zero"); [pool drain]; return 0; }
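
    The array idea being asked about is language-neutral; here is a minimal sketch of it, written in C++ for brevity (in Objective-C an NSArray of NSStrings, or the same plain C array, plays the identical role). One table lookup replaces the ten-case switch, and the digit loop mirrors the question's modulo/divide approach:

    ```cpp
    #include <cstdio>

    int main()
    {
        const char* names[10] = { "zero", "one", "two", "three", "four",
                                  "five", "six", "seven", "eight", "nine" };

        int input = 110;    // example value from the question
        int digits[32];     // plenty of room for any int
        int count = 0;

        if (input == 0) {           // special case, as in the original code
            std::printf("%s\n", names[0]);
            return 0;
        }
        while (input > 0) {         // peel off digits, least significant first
            digits[count++] = input % 10;
            input /= 10;
        }
        for (int i = count - 1; i >= 0; --i)    // print most significant first
            std::printf("%s\n", names[digits[i]]);
        return 0;
    }
    ```

    For the input 110 this prints "one", "one", "zero" on separate lines, matching the expected output.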

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I try to stick to best practices from here and there to somehow build my own working practices while observing the conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another environment's automatically. I don't know about any of you, but as far as I'm concerned, I have real difficulty sticking to only one way and then developing. I mean, I learn something new every day while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, which proved to be very practical in transactional programming in process systems. This seems to be less practical for desktop applications, as there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with information data. As for my current project, I have: Groups; Organizational Units; Users. All of these are considered entries in Active Directory. This turns out to be a good candidate for generics, an approach also taken by Bart De Smet's LINQ to AD on CodePlex. He has a DirectorySource<T>, and to manage, let's say, groups, he instantiates a source with the proper type: var groups = new DirectorySource<Group>(); This seems to be a very good way of doing it. Despite that, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing things, since each pattern offers certain advantages while also showing disadvantages under some usage conditions, I seem to want to develop with both patterns: a singleton Façade class with underlying factories which represent the subsystems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for its respective entity (group, user, OU). To make a very long story short, I often have plenty of ideas while developing and this causes me some trouble, as I go from one idea to another, feeling completely lost after a while. Yet I understand the advantages and disadvantages, and I have no trouble choosing one pattern over another depending on the situation. Nevertheless, when it comes to programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good. That is because I can't stand not doing something "perfect" the first time. The roles I play within the project are both project manager and programmer, and I am more comfortable in the project manager, architectural and analytical roles than in the developer's. Do any of you have some good habits to observe in project development? Thanks to you all! =)

    Read the article

  • Square Peg Web: Gets you the traffic to where it matters most: Your Website!

    - by demetriusalwyn
    Have you decided to start your business online, or is your business not reaching its targeted audience? Come to Square Peg Web, where you will find what you need to make your business reach new heights. The team at Square Peg Web is made up of professionals who understand what you want and make sure you get it right. Our confidence stems from the thousands of satisfied clients who keep referring friends and business associates to us, and we do not let our clients down. Many companies promise the sky, but how far does their work live up to those promises? We do not know about the others; however, we are sure that we strive to put together all our ideas and thoughts to make your website rank among the top. Web hosting is something that needs to have a personal touch; Square Peg Web customizes everything to suit your requirements so that you do not have to look further. With Square Peg Web you have a host of features to make your business go viral. Some of the product features offered with Square Peg Web are unlimited product options/variants/properties, giving you an option for price modifiers. You get unlimited customized input fields for your products, and prices can also be customer-defined. Square Peg Web provides you the option of using multiple product images with zoom features, and you can also list a particular product in several categories. There are other aspects which make Square Peg Web the best choice for your website needs; every sale of yours is important to you and to us. We make sure that each sale is tracked by product, along with the list of bestsellers that appeal to the audience. Other comprehensive statistics from Square Peg Web include searchable order data, an interface for shipments and order fulfillment, export of sales & customer data for use in a spreadsheet, and the ability to export orders to QuickBooks format. With Square Peg Web, the admin panel is a lot simpler. Administrative access is completely password-protected and any changes you make take effect in real time. You can have absolute control over the cart from anywhere around the world using your web browser, and the icing on the cake is the unlimited number of admin accounts that can be created for you. Square Peg Web offers you a world of experience, with options ranging from marketing websites to e-commerce and from customized applications to community-oriented sites. Some of the projects which appear in the portfolio of Square Peg Web are online marketing web sites, e-commerce web sites, customized web applications, blog design and programming, video sharing and downloadable web sites, online advertisements, Flash animation, customer and product support web sites, web site redesign and planning, and complete information architecture.

    Read the article

  • How safe is my safe rethrow?

    - by gustafc
    (Late edit: This question will hopefully be obsolete when Java 7 comes, because of the "final rethrow" feature which seems like it will be added.) Quite often, I find myself in situations looking like this: do some initialization try { do some work } catch any exception { undo initialization rethrow exception } In C# you can do it like this: InitializeStuff(); try { DoSomeWork(); } catch { UndoInitialize(); throw; } For Java, there's no good substitution, and since the proposal for improved exception handling was cut from Java 7, it looks like it'll take at best several years until we get something like it. Thus, I decided to roll my own: (Edit: Half a year later, final rethrow is back, or so it seems.) public final class Rethrow { private Rethrow() { throw new AssertionError("uninstantiable"); } /** Rethrows t if it is an unchecked exception. */ public static void unchecked(Throwable t) { if (t instanceof Error) throw (Error) t; if (t instanceof RuntimeException) throw (RuntimeException) t; } /** Rethrows t if it is an unchecked exception or an instance of E. */ public static <E extends Exception> void instanceOrUnchecked( Class<E> exceptionClass, Throwable t) throws E, Error, RuntimeException { Rethrow.unchecked(t); if (exceptionClass.isInstance(t)) throw exceptionClass.cast(t); } } Typical usage: public void doStuff() throws SomeException { initializeStuff(); try { doSomeWork(); } catch (Throwable t) { undoInitialize(); Rethrow.instanceOrUnchecked(SomeException.class, t); // We shouldn't get past the above line as only unchecked or // SomeException exceptions are thrown in the try block, but // we don't want to risk swallowing an error, so: throw new SomeException("Unexpected exception", t); } private void doSomeWork() throws SomeException { ... } } It's a bit wordy, catching Throwable is usually frowned upon, I'm not really happy at using reflection just to rethrow an exception, and I always feel a bit uneasy writing "this will not happen" comments, but in practice it works well (or seems to, at least). What I wonder is: Do I have any flaws in my rethrow helper methods? Some corner cases I've missed? (I know that the Throwable may have been caused by something so severe that my undoInitialize will fail, but that's OK.) Has someone already invented this? I looked at Commons Lang's ExceptionUtils but that does other things. Edit: finally is not the droid I'm looking for. I'm only interested to do stuff when an exception is thrown. Yes, I know catching Throwable is a big no-no, but I think it's the lesser evil here compared to having three catch clauses (for Error, RuntimeException and SomeException, respectively) with identical code. Note that I'm not trying to suppress any errors - the idea is that any exceptions thrown in the try block will continue to bubble up through the call stack as soon as I've rewinded a few things.

    Read the article

  • Constructor Injection and when to use a Service Locator

    - by Simon
    I'm struggling to understand parts of StructureMap's usage. In particular, in the documentation a statement is made regarding a common anti-pattern, the use of StructureMap as a Service Locator only instead of constructor injection (code samples straight from Structuremap documentation): public ShippingScreenPresenter() { _service = ObjectFactory.GetInstance<IShippingService>(); _repository = ObjectFactory.GetInstance<IRepository>(); } instead of: public ShippingScreenPresenter(IShippingService service, IRepository repository) { _service = service; _repository = repository; } This is fine for a very short object graph, but when dealing with objects many levels deep, does this imply that you should pass down all the dependencies required by the deeper objects right from the top? Surely this breaks encapsulation and exposes too much information about the implementation of deeper objects. Let's say I'm using the Active Record pattern, so my record needs access to a data repository to be able to save and load itself. If this record is loaded inside an object, does that object call ObjectFactory.CreateInstance() and pass it into the active record's constructor? What if that object is inside another object. Does it take the IRepository in as its own parameter from further up? That would expose to the parent object the fact that we're access the data repository at this point, something the outer object probably shouldn't know. public class OuterClass { public OuterClass(IRepository repository) { // Why should I know that ThingThatNeedsRecord needs a repository? // that smells like exposed implementation to me, especially since // ThingThatNeedsRecord doesn't use the repo itself, but passes it // to the record. // Also where do I create repository? Have to instantiate it somewhere // up the chain of objects ThingThatNeedsRecord thing = new ThingThatNeedsRecord(repository); thing.GetAnswer("question"); } } public class ThingThatNeedsRecord { public ThingThatNeedsRecord(IRepository repository) { this.repository = repository; } public string GetAnswer(string someParam) { // create activeRecord(s) and process, returning some result // part of which contains: ActiveRecord record = new ActiveRecord(repository, key); } private IRepository repository; } public class ActiveRecord { public ActiveRecord(IRepository repository) { this.repository = repository; } public ActiveRecord(IRepository repository, int primaryKey); { this.repositry = repository; Load(primaryKey); } public void Save(); private void Load(int primaryKey) { this.primaryKey = primaryKey; // access the database via the repository and set someData } private IRepository repository; private int primaryKey; private string someData; } Any thoughts would be appreciated. Simon

    Read the article

  • deleted gen folder, eclipse isn't generating it now :(

    - by LuxuryMode
    I accidentally deleted my gen folder and now, predictably, my resources are all messed up. I just created a gen folder myself and tried to project clean - that didn't work. Tried right-clicking project and going to android tools fix project properties - didn't work. Tried unchecking build automatically...didn't work. cleaned, closed project, closed eclipse, restarted, etc, etc. Nothing is working and I keep seeing this error: gen already exists but is not a source folder. Convert to a source folder or rename it. EDIT - OK was able to generate R.java, but now I'm getting crazy stuff in the console: [2011-06-14 17:06:11 - fastapp] Conversion to Dalvik format failed with error 1 [2011-06-14 17:06:42 - fastapp] Dx trouble processing "java/awt/font/NumericShaper.class": Ill-advised or mistaken usage of a core class (java.* or javax.*) when not building a core library. This is often due to inadvertently including a core library file in your application's project, when using an IDE (such as Eclipse). If you are sure you're not intentionally defining a core class, then this is the most likely explanation of what's going on. However, you might actually be trying to define a class in a core namespace, the source of which you may have taken, for example, from a non-Android virtual machine project. This will most assuredly not work. At a minimum, it jeopardizes the compatibility of your app with future versions of the platform. It is also often of questionable legality. If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine distribution, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message. If you go ahead and use "--core-library" but are in fact building an application, then be forewarned that your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem. If you are legitimately using some code that happens to be in a core package, then the easiest safe alternative you have is to repackage that code. That is, move the classes in question into your own package namespace. This means that they will never be in conflict with core system classes. JarJar is a tool that may help you in this endeavor. If you find that you cannot do this, then that is an indication that the path you are on will ultimately lead to pain, suffering, grief, and lamentation. [2011-06-14 17:06:42 - fastapp] Dx 1 error; aborting [2011-06-14 17:06:42 - fastapp] Conversion to Dalvik format failed with error 1 And eclipse can't resolve the import of my resources import com.me.fastapp.R;

    Read the article

  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the Driver class for JDBC connection to MySQL. The workstation is running on Linux, Fedora 10. I have manually set up the classpath variable for Java by CLI like this: bash-3.2$ echo $CLASSPATH /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar This shows that I've added the lastest mysql connection jar archive to my CLASSPATH variable. I've created a test JSP page which can be found here And source code for this page is: <%@page language="java"%> <%@page import="java.sql.*"%> <%@page import="java.util.*"%> <html> <head> <title>UTS JDBC MySQL connection test page</title> </head> <body> <% Connection con = null; out.print("Java version is : " + System.getProperty("java.version") + "<br />"); out.print("Tomcat version is : " + application.getServerInfo() + "<br />"); out.print("Servlet version is: " + application.getMajorVersion() + "<br />"); out.print("JSP version is : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() +"<br />"); //out.print("Java classpath is : " + System.getProperty("java.class.path")+ "<br />"); //out.print("JSP classpath is : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />"); //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />"); try { Class c = Class.forName("com.mysql.jdbc.Driver"); } catch(Exception e) { out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />"); } %> </body> </html> None of those commented out line would work, various Jsper Expetions would be thrown. You can check those Error pages from the following links: classpath Error page catalina Error page tomcat Error page It seems, from my limited knowledge of JSP and Servlet, the Tomcat environment "ignores" my Java CLASSPATH? In which case I cannot configure the MySQL JDBC package to let my Servlets(a JSP is but a Servlet anyway) work. I am not sure how to fix this issue. would it be better if I use an IDE like Eclipse or NetBeans and create a real Java "web app" so that everything can be "self-configured" by the usage of a web.config XML configuration file? So that I can certainly bypass this Tomcat environment restriction? Many thanks for the suggestions in advance.

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, but I believe that things will mess up when the project would go live. Here's my domain class that carries the image bytes: @Entity public class Picture implements java.io.Serializable{ long id; byte[] data; ... // getters and setters } And here's my controller that saves the file on submit: public class PictureUploadFormController extends AbstractBaseFormController{ ... protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response, Object command, BindException errors) throws Exception{ MutlipartFile file; // getting MultipartFile from the command object ... // beginning hibernate transaction ... Picture p=new Picture(); p.setData(file.getBytes()); pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p) // committing hiernate transaction ... } ... } Obviously a bad piece of code. Is there anyway I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into the memory and then pushing them into the database? I did some research on hibernate's support for Blob, and found this in Hibernate In Action book: java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms. Now apparently the Blob cannot be used, as it is not a good idea anyway, what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks

    Read the article

  • many-to-many performance concerns with fluent nhibernate.

    - by Ciel
    I have a situation where I have several many-to-many associations. In the upwards of 12 to 15. Reading around I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario. class Page { IList<Tag> Tags { get; set; } IList<Modification> Modifications { get; set; } IList<Aspect> Aspects { get; set; } } This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have IList<T> where T has its own IList<Page>. As you can see, from an object oriented standpoint, this is not really a problem. But from a database standpoint this begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking and in order to achieve the appropriate level of association required, I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that the HasManyToMany does not already do except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts? I am also concerned about Primary Key limits in this scenario. Most everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided. (That is, a Page has no knowledge of what owns it). Because of this, I also have no Inverse() mapped HasManyToMany collections. Also, I have read the similar question : Usage of ORMs like NHibernate when there are many associations - performance concerns But it really did not answer my concerns.

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which is returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.Xml.BufferBuilder.AddBuffer() at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count) at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself: Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it succeeded at that stage previously. Transfer smaller amounts Not an option, this is a third party web service over which I have no control (or at least it would take a long time to resolve, in the meantime I still have a problem) Can you do something to the LOH to make it less likely to fail? ... now this is most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures). Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory. Question is: any advice or suggestions for things to try?

    Read the article

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement some data structure for practice, I wanted to make sure that my code was exception safe. So I've tried to use std::auto_ptrs in what I think is an appropriate way. Simplifying somewhat, this is what I have: class Tree { public: ~Tree() { /* delete all Node*s in the tree */ } void insert(const string& to_insert); ... private: struct Node { ... vector<Node*> m_children; }; Node* m_root; }; template<T> void push_back(vector<T*>& v, auto_ptr<T> x) { v.push_back(x.get()); x.release(); } void Tree::insert(const string& to_insert) { Node* n = ...; // find where to insert the new node ... push_back(n->m_children, auto_ptr<Node>(new Node(to_insert)); ... } So I'm wrapping the function that would put the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like: Node* n = ...; // find where to insert the new node auto_ptr<Node> new_node(new Node(to_insert)); n->m_children.push_back(new_node.get()); new_node.release(); which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak. (Actually I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on stackoverflow.)

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways: 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance Missing operations such as truncating or extending a file (ala POSIX ftruncate) Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not Inconsistency between platform implementations; e.g. behavior of seeking pass the end of a file or usage of failbit/badbit on errors Don't need all the formatting facilities of stream or possibly even the buffering of streambuf streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation. Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example: class my_istream { public: virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0; virtual std::streamsize read(char* s, std::streamsize n) = 0; virtual void close() = 0; }; template <class T> class boost_istream : public my_istream { public: boost_istream(const T& device) : m_device(device) { } virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) { return boost::iostreams::seek(m_device, off, way); } virtual std::streamsize read(char* s, std::streamsize n) { return boost::iostreams::read(m_device, s, n); } virtual void close() { boost::iostreams::close(m_device); } private: T m_device; };

    Read the article

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library sql statement

    - by KallDrexx
    I have written a specialized class to wrap up the enterprise library database functionality for easier usage. The reasoning for using the Enterprise Library is because my applications commonly connect to both oracle and sql server database systems. My wrapper handles both creating connection strings on the fly, connecting, and executing queries allowing my main code to only have to write a few lines of code to do database stuff and deal with error handling. As an example my ExecuteNonQuery method has the following declaration: /// <summary> /// Executes a query that returns no results (e.g. insert or update statements) /// </summary> /// <param name="sqlQuery"></param> /// <param name="parameters">Hashtable containing all the parameters for the query</param> /// <returns>The total number of records modified, -1 if an error occurred </returns> public int ExecuteNonQuery(string sqlQuery, Hashtable parameters) { // Make sure we are connected to the database if (!IsConnected) { ErrorHandler("Attempted to run a query without being connected to a database.", ErrorSeverity.Critical); return -1; } // Form the command DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery); // Add all the paramters foreach (string key in parameters.Keys) { if (parameters[key] == null) _database.AddInParameter(dbCommand, key, DbType.Object, null); else _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString()); } return _database.ExecuteNonQuery(dbCommand); } _database is defined as private Database _database;. Hashtable parameters are created via code similar to p.Add("@param", value);. the issue I am having is that it seems that with enterprise library database framework you must declare the dbType of each parameter. This isn't an issue when you are calling the database code directly when forming the paramters but doesn't work for creating a generic abstraction class such as I have. In order to try and get around that I thought I could just use DbType.Object and figure the DB will figure it out based on the columns the sql is working with. Unfortunately, this is not the case as I get the following error: Implicit conversion from data type sql_variant to varchar is not allowed. Use the CONVERT function to run this query Is there any way to use generic parameters in a wrapper class or am I just going to have to move all my DB code into my main classes?

    Read the article

  • pthread and recursively calling execvp in C

    - by eduke
    To begin I'm sorry for my english :) I looking for a way to create a thread each time my program finds a directory, in order to call the program itself but with a new argv[2] argument (which is the current dir). I did it successfully with fork() but with pthread I've some difficulties. I don't know if I can do something like that : #include <unistd.h> #include <stdlib.h> #include <stdio.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/wait.h> #include <dirent.h> int main(int argc, char **argv) { pthread_t threadID[10] = {0}; DIR * dir; struct dirent * entry; struct stat status; pthread_attr_t attr; pthread_attr_init(&attr); int i = 0; char *res; char *tmp; char *file; if(argc != 3) { printf("Usage : %s <file> <dir>\n", argv[0]); exit(EXIT_FAILURE); } if(stat(argv[2],&status) == 0) { dir = opendir(argv[2]); file = argv[1]; } else exit(EXIT_FAILURE); while ((entry = readdir(dir))) { if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) { tmp = malloc(strlen(argv[2]) + strlen(entry->d_name) + 2); strcpy(tmp, argv[2]); strcat(tmp, "/"); strcat(tmp, entry->d_name); stat(tmp, &status); if (S_ISDIR(status.st_mode)) { argv[2] = tmp; pthread_create( &threadID[i], &attr, execvp(argv[0], argv), NULL); printf("New thread created : %d", i); i++; } else if (!strcmp(entry->d_name, file)) { printf(" %s was found - Thread number = %d\n",tmp, i); break; } free(tmp); } } pthread_join( threadID[i] , &res ); exit(EXIT_SUCCESS); } Actually it doesn't works : pthread_create( &threadID[i], &attr, execvp(argv[0], argv), NULL); I have no runtime error, but when the file to find is in another directory, the thread is not created and so execvp(argv[0], argv) is not called... Thank you for you help, Simon
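
    One point worth illustrating: the third argument of pthread_create must be a pointer to a start routine of type void *(*)(void *). In the call above, execvp(argv[0], argv) is executed immediately and its return value is passed in its place (and execvp would replace the whole process image anyway, so it cannot serve as a thread body). A minimal sketch of the expected shape, with a made-up ScanJob payload standing in for the real per-directory state:

    ```cpp
    #include <pthread.h>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical payload describing one directory to scan.
    struct ScanJob {
        const char* path;
    };

    // The start routine itself is passed to pthread_create, never its result.
    static void* scan_dir(void* arg)
    {
        ScanJob* job = static_cast<ScanJob*>(arg);
        std::printf("scanning %s in its own thread\n", job->path);
        // ... recurse with opendir()/readdir()/stat() here instead of re-exec'ing ...
        return nullptr;
    }

    int main()
    {
        pthread_t tid;
        ScanJob job = { "/tmp" };
        if (pthread_create(&tid, nullptr, scan_dir, &job) != 0) {
            std::perror("pthread_create");
            return EXIT_FAILURE;
        }
        pthread_join(tid, nullptr);
        return 0;
    }
    ```

    Compile with -pthread. If a fresh process per directory is really wanted, fork() followed by execvp() is the usual pairing instead of a thread.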

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPV4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them, the second table ('network_providers') contains a list of provider information for given netblocks. events table (~4,000,000 rows): event_id (int) event_name (varchar) ip_address (unsigned 4 byte int) network_providers table (~60,000 rows): ip_start (unsigned 4 byte int) ip_end (unsigned 4 byte int) provider_name (varchar) Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of: event_id,event_name,ip_address,provider_name If do a query along the lines of either of the following, I get the result I expect: SELECT provider_name FROM network_providers WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 SELECT provider_name FROM network_providers WHERE 3232235521 >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect: SELECT event.id, event.event_name, (SELECT provider_name FROM network_providers WHERE event.ip_address >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1) as provider FROM events Instead the a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works, just using it dynamically in the subquery in this manner that doesn't work for me. I suspect the problem is there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery. The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again. Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a between on ip_start & ip_end - that's intentional (the DB's can be out of date, and such cases the owner in the DB is almost always in the next specified range and 'best guess' is fine in this context) however I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.

    Read the article

  • xerces-c: Xml parsing multiple files

    - by user459811
    I'm atempting to learn xerces-c and was following this tutorial online. http://www.yolinux.com/TUTORIALS/XML-Xerces-C.html I was able to get the tutorial to compile and run through a memory checker (valgrind) with no problems however when I made alterations to the program slightly, the memory checker returned some potential leak bytes. I only added a few extra lines to main to allow the program to read two files instead of one. int main() { string configFile="sample.xml"; // stat file. Get ambigious segfault otherwise. GetConfig appConfig; appConfig.readConfigFile(configFile); cout << "Application option A=" << appConfig.getOptionA() << endl; cout << "Application option B=" << appConfig.getOptionB() << endl; // Added code configFile = "sample1.xml"; appConfig.readConfigFile(configFile); cout << "Application option A=" << appConfig.getOptionA() << endl; cout << "Application option B=" << appConfig.getOptionB() << endl; return 0; } I was wondering why is it when I added the extra lines of code to read in another xml file, it would result in the following output? ==776== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info ==776== Command: ./a.out ==776== Application option A=10 Application option B=24 Application option A=30 Application option B=40 ==776== ==776== HEAP SUMMARY: ==776== in use at exit: 6 bytes in 2 blocks ==776== total heap usage: 4,031 allocs, 4,029 frees, 1,092,045 bytes allocated ==776== ==776== 3 bytes in 1 blocks are definitely lost in loss record 1 of 2 ==776== at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261) ==776== by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40) ==776== by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751) ==776== by 0x4038E7: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out) ==776== by 0x403B13: main (in /home/bonniehan/workspace/test/a.out) ==776== ==776== 3 bytes in 1 blocks are definitely lost in loss record 2 of 2 ==776== at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261) ==776== by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40) ==776== by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751) ==776== by 0x40393F: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out) ==776== by 0x403B13: main (in /home/bonniehan/workspace/test/a.out) ==776== ==776== LEAK SUMMARY: ==776== definitely lost: 6 bytes in 2 blocks ==776== indirectly lost: 0 bytes in 0 blocks ==776== possibly lost: 0 bytes in 0 blocks ==776== still reachable: 0 bytes in 0 blocks ==776== suppressed: 0 bytes in 0 blocks ==776== ==776== For counts of detected and suppressed errors, rerun with: -v ==776== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)
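
    For what it's worth, both valgrind records point at strings coming out of the transcoder inside GetConfig::readConfigFile. A common cause of exactly this signature in code based on that tutorial is a buffer returned by XMLString::transcode() that is never handed back with XMLString::release(), so each extra readConfigFile() call leaks another copy. A hedged sketch of the leak-free pattern (the helper name and the DOMElement* parameter are illustrative, not taken from the tutorial):

    ```cpp
    #include <xercesc/dom/DOM.hpp>
    #include <xercesc/util/XMLString.hpp>
    #include <string>

    using namespace xercesc;

    // Copy a transcoded attribute into a std::string and release the
    // Xerces-owned buffers immediately, so repeated parses leak nothing.
    std::string attributeAsString(DOMElement* element, const char* name)
    {
        XMLCh* xmlName = XMLString::transcode(name);          // must be released
        const XMLCh* value = element->getAttribute(xmlName);  // owned by the DOM
        char* cValue = XMLString::transcode(value);           // must be released
        std::string result(cValue ? cValue : "");
        XMLString::release(&cValue);
        XMLString::release(&xmlName);
        return result;
    }
    ```

    Every transcode() in readConfigFile would need a matching release() (or a small RAII wrapper around it) for the per-run leak totals to drop to zero.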

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found. Models for hierarchical data: slides with good explanations of tradeoffs and example usage Representing hierarchies in MySQL: very good overview of Nested Set in particular Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way on explanation Options Ones I am aware of and general features: Adjacency List: Columns: ID, ParentID Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse. Nested Set (a.k.a Modified Preorder Tree Traversal) First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties Columns: Left, Right Cheap level, ancestry, descendants Compared to Adjacency List, moves, inserts, deletes more expensive. Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work. Nested Intervals Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach) Columns: ancestor, descendant Stands apart from table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order) For complete knowledge of a hierarchy needs to be combined with another option. Flat Table A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete Cheap ancestry and descendants Good Use: threaded discussion - forums / blog comments Lineage Column (a.k.a. Materialized Path, Path Enumeration) Column: lineage (e.g. /parent/child/grandchild/etc...) Limit to how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path') Ancestry tricky (database specific queries) Database Specific Notes MySQL Use session variables for Adjacency List Oracle Use CONNECT BY to traverse Adjacency Lists PostgreSQL ltree datatype for Materialized Path SQL Server General summary 2008 offers HierarchyId data type appears to help with Lineage Column approach and expand the depth that can be represented.

    Read the article

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (range from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into a MS SQL table. Each DB rows basically consists of a record_ID (Primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegals chars taken out). I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return. My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL: UPDATE dbo.Records SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END, . . (all 50 varchar field CASE statements here) . WHERE dbo.Records.record_ID = @record_ID Is there any way I can help my performance here. Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just sheer quantity of 750+ UPDATEs and things are just slow (it's a quad proc server with 8GB ram). Any suggestions appreciated.

    Read the article

  • Why does my Perl regular expression only find the last occurrence?

    - by scharan
    I have the following input to a Perl script and I wish to get the first occurrence of NAME="..." strings in each of the <table>...</table> structures. The entire file is read into a single string and the regex acts on that input. However, the regex always returns the last occurrence of NAME="..." strings. Can anyone explain what is going on and how this can be fixed? Input file: ADSDF <TABLE> NAME="ORDERSAA" line1 line2 NAME="ORDERSA" line3 NAME="ORDERSAB" </TABLE> <TABLE> line1 line2 NAME="ORDERSB" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSC" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSD" line3 line3 line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES2" line3 NAME="QUOTES3" NAME="QUOTES4" line3 NAME="QUOTES5" line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES6" NAME="QUOTES7" NAME="QUOTES8" NAME="QUOTES9" line3 line3 </TABLE> <TABLE> NAME="MyName IsKhan" </TABLE> Perl Code starts here: use warnings; use strict; my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)'; sub extractNames($$){ my ($ifh, $ofh) = @_; my $fullFile; read ($ifh, $fullFile, 1024);#Hardcoded to read just 1024 bytes. while( $fullFile =~ m#$nameRegExp#gi){ print "found: ".$4."\n"; } } sub main(){ if( ($#ARGV + 1 )!= 1){ die("Usage: extractNames infile\n"); } my $infileName = $ARGV[0]; my $outfileName = $ARGV[1]; open my $inFile, "<$infileName" or die("Could not open log file $infileName"); my $outFile; #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName"); extractNames( $inFile, $outFile ); close( $inFile ); #close( $outFile ); } #call main();

    Read the article

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all I'm getting the following error from a VB NET web application written in VS 2003, on framework 1.1. The web app is running on Windows Server 2000, IIS 5, and is reading from a SQL server 2000 database running on the same machine. System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. at System.Data.DataSet.FailedEnableConstraints() at System.Data.DataSet.EnableConstraints() at System.Data.DataSet.set_EnforceConstraints(Boolean value) at System.Data.DataTable.EndLoadData() at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet) The problem appears when the web app is under a high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception message. Once the problem appears, very few requests actually make it through and get processed normally, about 2 in every 30. The vast majority of requests fail, until a SQL Server restart or IIS reset is performed. The system then start processing requests normally, and after some time it starts throwing the same error. The error occurs when a data adapter runs the Fill() method against a SELECT statement, to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. This error occurs on various SELECT statements, acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the table from which the data is read. Both the dataset definition and the data in the table are fine. Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated, considering the numbers of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of a total of 3.5 GB on the server. At first I suspected some sort of index or database corruption, but after running DBCC CHECKDB, no errors were found in the database. So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mess up the data it's supposed to pass to the dataset, resulting in constraint violation due to, say, data type/length mismatch? I tried accessing the RowError messages of the data rows in the retrieved dataset tables but I kept getting empty strings. I know that HasErrors = true for the datatables in question. I have not set the EnableConstraints = false, and I don't want to do that. Thanks in advance. Ray

    Read the article
