Search Results

Search found 23661 results on 947 pages for 'worse is better'.

Page 188/947 | < Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >

  • C++: Static variable inside a constructor, are there any drawbacks or side effects?

    - by doc
    What I want to do: run some prerequisite code whenever an instance of the class is going to be used inside a program. This code will check for requirements etc. and should be run only once. I found that this can be achieved by using another object as a static variable inside a constructor. Here's an example for a better picture:

        class Prerequisites {
        public:
            Prerequisites() {
                std::cout << "checking requirements of C, ";
                std::cout << "registering C in dictionary, etc." << std::endl;
            }
        };

        class C {
        public:
            C() {
                static Prerequisites prerequisites;
                std::cout << "normal initialization of C object" << std::endl;
            }
        };

    What bothers me is that I haven't seen a similar use of static variables so far. Are there any drawbacks or side effects, or am I missing something? Or maybe there is a better solution? Any suggestions are welcome.
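
    (Illustrative only, not part of the original question: a minimal sketch of the same "run once" idea expressed with std::call_once, assuming a C++11 compiler; names are taken from or modelled on the question.)

        #include <iostream>
        #include <mutex>

        std::once_flag c_prereq_flag;

        class C {
        public:
            C() {
                // The lambda runs exactly once, no matter how many C objects are created,
                // and the one-time intent is explicit in the code.
                std::call_once(c_prereq_flag, [] {
                    std::cout << "checking requirements of C" << std::endl;
                });
                std::cout << "normal initialization of C object" << std::endl;
            }
        };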

    Read the article

  • Double join with habtm in ActiveRecord

    - by Daniel Huckstep
    I have a weird situation involving the need for a double inner join. I have tried the query I need, I just don't know how to make Rails do it. The data:

        Account (has_many :sites)
        Site    (habtm :users, belongs_to :account)
        User    (habtm :sites)

    Ignore that they are habtm or whatever, I can make them habtm or has_many :through. I want to be able to do @user.accounts or @account.users. Then of course I should be able to do

        @user.accounts << @some_other_account

    and then have @user.sites include all the sites from @some_other_account. I've fiddled with habtm and has_many :through but can't get it to do what I want. Basically I need to end up with a query like this (copied from phpMyAdmin; tested and works):

        SELECT accounts.*
        FROM accounts
        INNER JOIN sites ON sites.account_id = accounts.id
        INNER JOIN user_sites ON sites.id = user_sites.site_id
        WHERE user_sites.user_id = 2

    Can I do this? Is it even a good idea to have this double join? I am assuming it would work better if users had the association with accounts to begin with, and then worry about getting @user.sites instead, but it works better for many other things if it is kept the way it is (users <- sites).
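
    (Illustrative only: one way to expose @user.accounts is to wrap the working SQL in a model method with find_by_sql; the method body below is a sketch under that assumption, not from the original post.)

        class User < ActiveRecord::Base
          has_and_belongs_to_many :sites, :join_table => "user_sites"

          # Accounts reachable through this user's sites (same SQL as above, bound safely).
          def accounts
            Account.find_by_sql([<<-SQL, id])
              SELECT accounts.*
              FROM accounts
              INNER JOIN sites      ON sites.account_id = accounts.id
              INNER JOIN user_sites ON sites.id = user_sites.site_id
              WHERE user_sites.user_id = ?
            SQL
          end
        end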

    Read the article

  • Cheated!! Please help

    - by Rohit K
    I was experiencing some hard disk problems with my HP laptop. At bootup, when the system diagnostics ran, it showed "SMART drive attribute failed." I also repeatedly got a warning about imminent hard drive failure telling me to back up my data. So I gave my laptop to an authorized HP service center for repair. They formatted my hard disk and installed a pirated copy of Windows 7 Ultimate (I earlier had genuine Windows 7 Home Premium running on my laptop); they told me they cover out-of-warranty issues and even charged me for it. What's worse is that the hard drive problem is still present, and all I am left with is an illegal copy of Windows, which I think also voids my warranty. What should I do? I did purchase a genuine Windows license with my laptop, so there must be some way to reinstall it even though I no longer have a genuine copy on my machine. Can't I get legitimate keys to reinstall Windows 7 from Microsoft, since I did pay for the software when I purchased my machine? And if that's not possible, how can I claim warranty and get my hard disk replaced by HP?

    Read the article

  • How to construct objects based on XML code?

    - by the_drow
    I have XML files that are a representation of a portion of HTML code. Those XML files also have widget declarations. Example XML file:

        <message id="msg">
          <p>
            <Widget name="foo" type="SomeComplexWidget" attribute="value">
              inner text here, sets another attribute or inserts another
              widget into the tree if needed...
            </Widget>
          </p>
        </message>

    I have a main Widget class that all of my widgets inherit from. The question is how I would create it. Here are my options:

    1. Create a compile-time tool that will parse the XML file and create the necessary code to bind the widgets to the needed objects.
       Advantages: No extra run-time overhead induced on the system. It's easy to bind setters.
       Disadvantages: Adds another step to the build chain. Hard to maintain, as every widget in the system has to be added to the parser. Use of macros to bind the widgets. Complex code.

    2. Find a method to register all widgets into a factory automatically.
       Advantages: All of the binding is done completely automatically. Easier to maintain than option 1, as every new widget only needs to call a WidgetFactory method that registers it.
       Disadvantages: No idea how to bind setters without introducing a maintainability nightmare. Adds memory and run-time overhead. Complex code.

    What do you think is better? Can you suggest a better solution?
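
    (Illustrative only, not from the original post: a minimal C++11 sketch of option 2, a factory where each widget type registers itself through a static object; WidgetFactory and SomeComplexWidget follow the question, everything else is an assumption.)

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        class Widget { public: virtual ~Widget() {} };

        class WidgetFactory {
        public:
            using Creator = std::function<std::unique_ptr<Widget>()>;
            static WidgetFactory& instance() { static WidgetFactory f; return f; }
            void registerType(const std::string& type, Creator c) { creators_[type] = c; }
            std::unique_ptr<Widget> create(const std::string& type) const {
                auto it = creators_.find(type);
                if (it == creators_.end()) return nullptr;
                return it->second();
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        // One static Registrar per widget type performs the registration automatically.
        template <typename T>
        struct Registrar {
            explicit Registrar(const std::string& type) {
                WidgetFactory::instance().registerType(type, [] {
                    return std::unique_ptr<Widget>(new T());
                });
            }
        };

        class SomeComplexWidget : public Widget {};
        static Registrar<SomeComplexWidget> someComplexWidgetReg("SomeComplexWidget");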

    Read the article

  • Running two graphics cards (non-SLI) to power 3D on two different monitors?

    - by Delameko
    Hi, I'm a bit clueless about this, so excuse my naivety. I have two video cards, an Nvidia 8800 and a GT 120, powering three monitors. I run two 3D game instances (two EverQuest 2 clients), one on each of my first two monitors. It's been running fine, although sometimes it sounds like the computer is trying to take off. Today I realised that I was actually playing them on the two monitors that are both powered by the 8800. Thinking that I might as well make use of the power of both cards, I tried switching the monitor cables over so that each card would be "powering" one of the clients. (Is it silly to assume this is how it works?) This doesn't seem to have had the desired effect, as the client running on the 8800 screen is running worse than it was before. Is it even possible to run two clients on separate GPUs? Is SLI the only way to utilise two GPUs? Is there something special I have to do? Or do I have to set the client to use a particular GPU (an option I can't seem to find in EQ2)? I run the clients in windowed mode if that makes any difference, and I'm running Windows 7. Thanks.

    Read the article

  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually and which I'm currently documenting using merely a plain text file. "Tools" can be interpreted rather loosely here; anything that's a step up from the plain text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application... Some things that would be great to have better support for (see also the example below):

    - Checking off each step while you're manually executing the test.
    - Showing the next step(s) in a small window that is always kept in front of all other windows.
    - Automatically updating the 'last tested' and 'using svn revision' info.
    - Keeping a record of all previous testing rounds (not just the last one).
    - ...

    Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain text file to give you a better idea of what I'm looking for:

        - Check that exported web pages render properly in Safari.
          Last tested: 2010-03-24
          Using SVN revision: 1000
          Steps:
          - Open a new document.
          - Add some items to the document.
          - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.
          - Open the web page in Safari, script:
              tell application "Finder"
                  open file "Test.html" of folder "Export Test" of desktop
              end tell
          Expected results:
          - The web page should appear properly with all items shown.
          Clean up steps:
          - Remove the folder "Export Test" from the Desktop.

    (Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications through the Services menu by selecting the snippet and using the application menu > Services > Script Editor > Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole.)

    Read the article

  • passing xml to a webservice

    - by Neale
    I have a simple web service that will have one method: DoTransactions(xml). The reason I am using XML as a parameter is that the parameters will often change. So for example it could be:

        <payload>
          <userId>1234</userId>
          <partnerId>ptn654</partnerId>
        </payload>

    or

        <payload>
          <partnerId>ptn654</partnerId>
          <items>
            <item1>
              <cost>10</cost>
              <description>This is item 1</description>
            </item1>
          </items>
        </payload>

    As you can see, the XML string will always change (this is due to a client request). Would it be better to pass in a string and parse the XML in the method, or is there a better way to do it? This web service will be used from various different code languages.

    Read the article

  • Why should I prefer OSGi Services over exported packages?

    - by Jens
    Hi, I am trying to get my head around OSGi services. The main question I keep asking myself is: what's the benefit of using services instead of working with bundles and their exported packages? As far as I know, the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same. A bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me. Another aspect is that services seem more flexible: there could be many implementations for one specific interface. On the other hand, there can be a lot of different implementations for a specific exported package too. In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, thus possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies. So far I could not find a blog post, book or presentation that clearly describes why services are better than just exposing functionality by exporting and importing packages. To sum up my questions: what are the key benefits of using OSGi services that make them superior to exporting and importing packages?
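
    (Illustrative only, not from the original question: a minimal sketch of publishing and consuming an OSGi service with the plain org.osgi.framework API; the Greeter interface and class names are assumed examples.)

        // Greeter.java - the service interface lives in an exported API package.
        public interface Greeter { String greet(String name); }

        // ProviderActivator.java - registers an implementation under the interface.
        import org.osgi.framework.*;
        public class ProviderActivator implements BundleActivator {
            public void start(BundleContext ctx) {
                ctx.registerService(Greeter.class.getName(),
                    new Greeter() { public String greet(String n) { return "Hello " + n; } },
                    null);
            }
            public void stop(BundleContext ctx) { }  // the registration is cleaned up when the bundle stops
        }

        // ConsumerActivator.java - looks the service up instead of importing an implementation package.
        import org.osgi.framework.*;
        public class ConsumerActivator implements BundleActivator {
            public void start(BundleContext ctx) {
                ServiceReference ref = ctx.getServiceReference(Greeter.class.getName());
                if (ref != null) {
                    Greeter greeter = (Greeter) ctx.getService(ref);
                    System.out.println(greeter.greet("OSGi"));
                    ctx.ungetService(ref);
                }
            }
            public void stop(BundleContext ctx) { }
        }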

    Read the article

  • Could I have destroyed Partitioning-Scheme/Filesystem of HDDs with External Harddrive Case with builtin Raid-Controller?

    - by th3m3s
    I had just recently bought a Fantec QB-35US3R to have a nice box on my desk to make some backups to. Along with the HDD bay I had ordered some 4 TB HDDs to run in RAID 5, which is handled by the hardware RAID controller of the Fantec HDD bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea to put three old 1 TB disks in the Fantec device, just to test it... Long story short: I made a backup of the most important data on these three disks before they broke. I had set the configuration scheme to RAID 3 on the Fantec device. It seems that the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the file system, because when put into an HDD docking station, the disks get recognized by the OS (Ubuntu/Linux) but are no longer mountable. I tried to recover the data from one HDD via GParted (parted), which ran for some hours without success. Here I stopped before trying other tools, because I read that the longer a hard drive runs after the partitioning got destroyed, the worse it gets. What could the HDD bay have done to my lovely hard disk drives? Is there some routine a RAID controller executes when it wants to create a RAID system? Like erasing the partition table (seems implausible to me), or writing some information to every hard drive in the RAID (seems more likely to me)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?

    Read the article

  • HTML columns or rows for form layout?

    - by Valera
    I'm building a bunch of forms that have labels and corresponding fields (input elements or more complex elements). Labels go on the left, fields go on the right. Labels in a given form should all be a specific width so that the fields all line up vertically. There are two ways (maybe more?) of achieving this:

    1. Rows: Float each label and each field left. Put each label and field in a field-row div/container. Set the label width to some specific number. With this approach labels on different forms will have different widths, because they'll depend on the width of the text in the longest label.

    2. Columns: Put all labels in one div/container that's floated left, and put all fields in another floated-left container with padding-left set. This way the labels and even the label container don't need to have their widths set, because the column layout and the padding-left will uniformly take care of vertically lining up all the fields.

    So approach #2 seems to be easier to implement (because the widths don't need to be set all the time), but I think it's also less object oriented, because a label and the field that goes with that label are not grouped together, as they are in approach #1. Also, if building forms dynamically, approach #2 doesn't work as well with functions like addRow(label, field), since it would have to know about the label and field containers, instead of just creating/adding one field-row element. Which approach do you think is better? Is there another, better approach than these two?

    Read the article

  • getting BR-separated text via DOM in JS/JQuery?

    - by Hellion
    Hi all, I am writing a Greasemonkey script that is parsing a page with the following general structure:

        <table>
          <tr><td><center>
            <b><a href="show.php?who=IDNumber">(Account Name)</a></b> (#IDNumber)
            <br>
            (Rank)
            <br>
            (Title)
            <p>
            <b>Statistics:</b>
            <br>
            <table>
              <tr><td>blah blah etc.
            </td></tr></table>
          </center></table>

    I'm specifically trying to grab the (Title) part out of that. As you can see, however, it's set off only by a <br> tag, has no id of its own, and is just part of the text of the <center> tag, and that tag has a whole raft of other text associated with it. Right now what I'm doing to get it is taking the innerHTML of the <center> tag and using a regex on it, matching for /<br>([A-Za-z ]*)<p><b>Statistics/. That is working okay for me, but it feels like there's got to be a better way to pick that particular text out of there. ... So, is there a better way? Or should I complain to the site programmer that he needs to make that text more accessible? :-)
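
    (Illustrative only: a minimal DOM-based sketch of grabbing the title without a regex, assuming the first <center> on the page is the one in question and that whitespace handling can stay simple.)

        var center = document.getElementsByTagName('center')[0];   // assumed: the <center> described above
        // The title is the text node that follows the second <br> inside the <center>.
        var brs = center.getElementsByTagName('br');
        var node = brs[1] ? brs[1].nextSibling : null;
        var title = (node && node.nodeType === 3) ? node.nodeValue.replace(/^\s+|\s+$/g, '') : '';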

    Read the article

  • Generating HTML Programmatically in C#, Targeting Printed Reports

    - by John Passaniti
    I've taken over a C# (2.0) code base that has the ability to print information. The code to do this is insanely tedious. Elements are drawn onto each page, with magic constants representing positioning. I imagine the programmer sitting with a ruler, designing each page by measuring and typing in the positions. And yes, one could certainly come up with some nice abstractions to make this approach rational. But I am looking at a different method. The idea is that I'll replace the current printing code with code that generates static HTML pages, saves them to a file, and then launches the web browser on that file. The most obvious benefit is that I don't have to deal with formatting -- I can let the web browser do that for me with tags and CSS. So what I am looking for is a very lightweight set of classes that I can use to help generate HTML. I don't need anything as heavyweight as HtmlTextWriter. What I'm looking for is something to avoid fragments like this:

        String.Format("<tr><td>{0}</td><td>{1}</td></tr>", foo, bar);

    And instead have this kind of feel:

        ...
        table().
            tr().
                td(foo).
                td(bar)

    Or something like that. I've seen lightweight classes like that for other languages but can't find the equivalent (or better) for C#. I can certainly write it myself, but I'm a firm believer in not reinventing wheels. Know anything like this? Know anything better than this?
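
    (Illustrative only: a small C# 2.0-friendly sketch of the kind of lightweight fluent helper described above; the class and method names are assumptions, not an existing library.)

        using System.Text;

        public class Html
        {
            private readonly StringBuilder sb = new StringBuilder();

            public Html Table()   { sb.Append("<table>");  return this; }
            public Html EndTable(){ sb.Append("</table>"); return this; }
            public Html Tr()      { sb.Append("<tr>");     return this; }
            public Html EndTr()   { sb.Append("</tr>");    return this; }
            public Html Td(string text)
            {
                sb.Append("<td>").Append(text).Append("</td>");
                return this;
            }

            public override string ToString() { return sb.ToString(); }
        }

        // Usage:
        // string row = new Html().Table().Tr().Td(foo).Td(bar).EndTr().EndTable().ToString();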

    Read the article

  • Very high Magento/Apache memory usage even without visitors (are we fooled by our hosting company?)

    - by MrDobalina
    I am no server guy and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2 GB of RAM at a Magento-specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has fewer than 1000 SKUs and not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our Magento log files are clean, and we have no huge database tables or anything like that. When we take a look at our server's real-time stats, we see that memory usage jumped from about 34% to 71% and now 82% in just a few days while idle, with no visitors on the site. Our hosting company said that we do not need to worry about that, as it's maybe related to MySQL, which creates buffers (which are maybe not even actually being used), and that what is important is CPU and swap - those stats are OK. They also said that the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure if we can trust that statement, as we only have 4 plugins installed (all from Aheadworks and Amasty, which are known as some of the best Magento extension developers). Our template modifications are purely HTML and CSS, with no modifications to the PHP code. Our PageSpeed score is 93/100 in Firebug and Magento is properly configured, so the problem really only becomes obvious when there are a handful of users on the site at the same time. Can anyone confirm our hosting company's statement about memory usage, and where can I start looking for a solution?

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?

    Old computer:
    - Intel Core 2 Duo CPU @ 3.0 GHz (stock)
    - 4 GB OCZ DDR2 800 RAM
    - Wolfdale E8400 mb
    - nVidia GeForce 8600 GT

    New computer:
    - Intel Core i7 920 @ ~3.2 GHz
    - 6 GB OCZ DDR3 1066 RAM
    - EVGA X58 SLI LE motherboard
    - nVidia GeForce GTX 275

    Vista x64 Home Premium on both. "Runs slower" is defined as:
    - poorer FPS performance in the same games and applications
    - takes longer to start up
    - general desktop usage (checking email, opening up files, running exes) is noticeably slower

    At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz. Beyond that I have no idea. I've been reading stuff about RAM timings and voltages and the like, but I really have no idea about that stuff. I'm a psychologist who has a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question, or not appropriate to SO. I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Tools for debugging when debugger can't get you there?

    - by brian1001
    I have a fairly complex (approx. 200,000 lines of C++ code) application that has decided to crash, although it crashes a little differently on a couple of different systems. The trick is that it doesn't crash or trap out in the debugger. It only crashes when the application EXE is run independently (either the debug EXE or the release EXE - both behave the same way). When it crashes in the debug EXE and I get it to start debugging, the call stack is buried down in the Windows/MFC part of things and doesn't reflect any of my code. Perhaps I'm seeing stack corruption of some sort, but I'm just not sure at the moment. My question is more general - it's about tools and techniques. I'm an old programmer (C and assembly language days) and a relative newcomer (a couple/few years) to C++ and Visual Studio (2003 for this project). Are there tricks or techniques anyone's had success with in tracking down crashing issues when you cannot make the software crash in a debugger session? Stuff like permission issues, for example? The only thing I've thought of is to start plugging debug/status messages into a logfile, but that's a long, hard way to go. Been there, done that. Any better suggestions? Am I missing some tools that would help? Is VS 2008 better for this kind of thing? Thanks for any guidance. Some very smart people here (you know who you are!). Cheers.

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, as well as various other minor differences) actually cause any notable performance loss - i.e. potentially worse than two matched 128 MB/s drives - or does RAID not really care and give you essentially an optimal solution (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs between RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it!

    What I'm doing: I'm running a Linux command to output data into a variable, parse the data, and output it as an array. Array values will be displayed on a page using PHP, and this PHP page output is requested via AJAX every 10 seconds; so, in effect, the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although this is usually much lower.

    Alternative idea: I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cron job running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Then, any AJAX calls to the PHP output would return values from the MySQL database rather than making a direct call to execute server code every 10 seconds.

    Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a Sempron processor. Yes, they do still exist. Having this only execute when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of Linux command may be of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about these! :)
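
    (Illustrative only: a rough PHP sketch of the cron-job variant; the transmission-remote call is the one from the question, while the table name, column names, connection details and $userId are assumptions.)

        <?php
        // fetch_output.php - run from cron; caches the command output in MySQL.
        $output = shell_exec("transmission-remote $host:$port --auth $username:$password -l");

        $pdo  = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
        $stmt = $pdo->prepare(
            'REPLACE INTO command_output (user_id, output, fetched_at) VALUES (?, ?, NOW())'
        );
        $stmt->execute(array($userId, $output));

        // ajax.php - what the browser polls every 10 seconds; reads the cached row instead of exec()ing.
        $stmt = $pdo->prepare('SELECT output, fetched_at FROM command_output WHERE user_id = ?');
        $stmt->execute(array($userId));
        echo json_encode($stmt->fetch(PDO::FETCH_ASSOC));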

    Read the article

  • HTML5 Canvas and Game Programming

    - by LemonBeagle
    I hope this isn't too open ended. I'm wondering if there is a better (more battery-friendly) way of doing this -- I have a small HTML5 game, drawn on a canvas (let's say 500x500). I have some objects whose positions I update every 50 ms or so. My current implementation re-draws the entire canvas every 50 ms. I can't imagine that being very good for battery life on mobile platforms. Is there a better way to do this? This must be a common pattern with games. EDIT: as requested, here are some more details. Right now the objects are geometric primitives drawn via arcs and lines. I'm not opposed to making these small PNG/JPG/GIF files instead if that'd help out. These are small graphics -- just 15x15 or so. As the game progresses, more and more of the screen changes at a time. However, at the start, the screen changes relatively slowly (the objects randomly move a few pixels every 50 ms).
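
    (Illustrative only: a minimal sketch of redrawing only the "dirty" regions instead of the whole canvas; the canvas id and the object fields x, y, prevX, prevY, size, color are assumptions.)

        var canvas = document.getElementById('game');   // assumed canvas id
        var ctx = canvas.getContext('2d');
        var gameObjects = [];                           // filled elsewhere

        function update() {
            for (var i = 0; i < gameObjects.length; i++) {
                var obj = gameObjects[i];
                // Erase only where the object used to be, then redraw it at its new position.
                // (Overlapping objects would also need their neighbours redrawn.)
                ctx.clearRect(obj.prevX, obj.prevY, obj.size, obj.size);
                ctx.fillStyle = obj.color;
                ctx.fillRect(obj.x, obj.y, obj.size, obj.size);
                obj.prevX = obj.x;
                obj.prevY = obj.y;
            }
        }

        setInterval(update, 50);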

    Read the article

  • Joomla Site Templates: Architecture Advise

    - by Vincent
    Our client provided us with HTML templates to turn into a Joomla template. The problem is that their designs are not Joomla-template friendly: a lot of the HTML design is not consistent in structure. Currently the only solution we have is applying a template structure pattern that fits most of their designs and having separate Joomla templates to take care of the ones that don't fit. We have the generic Joomla template configured with different positions for each div and assign each article to its respective position in the template. Some articles, though, have menu modules within them, so our solution is to split the article into two positions and define positions for each menu module. Is this method better than defining module positions within an article's content to render menus within an article? Is there a better way of showing articles in specific div positions than having each article be represented by a module that renders in a specific div (position) in the template? Right now our way of rendering an article's content in a specific position is: create an article, assign a module to it (moduleAsArticle), and give that module a position in the template.

    Read the article

  • Is there a more concise regular expression to accomplish this task?

    - by mpminnich
    First off, sorry for the lame title, but I couldn't think of a better one. I need to test a password to ensure the following: passwords must contain at least 3 of the following:

    - upper case letters
    - lower case letters
    - numbers
    - special characters

    Here's what I've come up with (it works, but I'm wondering if there is a better way to do this):

        Dim lowerCase As New Regex("[a-z]")
        Dim upperCase As New Regex("[A-Z]")
        Dim numbers As New Regex("\d")
        Dim special As New Regex("[\\\.\+\*\?\^\$\[\]\(\)\|\{\}\/\'\#]")
        Dim count As Int16 = 0

        If Not lowerCase.IsMatch(txtUpdatepass.Text) Then
            count += 1
        End If
        If Not upperCase.IsMatch(txtUpdatepass.Text) Then
            count += 1
        End If
        If Not numbers.IsMatch(txtUpdatepass.Text) Then
            count += 1
        End If
        If Not special.IsMatch(txtUpdatepass.Text) Then
            count += 1
        End If

    If at least 3 of the criteria have not been met, I handle it. I'm not well versed in regular expressions and have been reading numerous tutorials on the web. Is there a way to combine all 4 regexes into one? But I guess doing that would not allow me to check if at least 3 of the criteria are met. On a side note, is there a site that has an exhaustive list of all characters that would need to be escaped in a regex (those that have special meaning - e.g. $, ^, etc.)? As always, TIA. I can't express enough how awesome I think this site is.
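
    (Illustrative only: a sketch that counts the criteria that are met, rather than the ones that are not, by looping over the four patterns; the special-character class used here is an assumption and differs from the one in the question.)

        Imports System.Text.RegularExpressions

        Dim patterns() As String = {"[a-z]", "[A-Z]", "\d", "[^a-zA-Z0-9]"}
        Dim met As Integer = 0

        For Each pattern As String In patterns
            If Regex.IsMatch(txtUpdatepass.Text, pattern) Then
                met += 1
            End If
        Next

        If met < 3 Then
            ' Password does not satisfy at least 3 of the 4 criteria.
        End If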

    Read the article

  • Sharing class member data between sub components

    - by Tim Gradwell
    I have an aggregate 'main' class which contains some data that I wish to share. The main class also has other class members. I want to share the data with these other class members. What is the correct / neatest way to do this? The specific example I have is as follows. The main class is a .NET Form. I have some controls (actually controls within controls) on the main form which need access to the shared data.

        Main Form
        - DataX
        - DataY
        - Control1
        -- SubControl1
        - Control2
        -- SubControl2

    SubControls 1 and 2 both wish to access DataX and DataY. The trouble is, I feel like better practice (to reduce coupling) would be that either subcontrols should not know about Main Form, or Main Form should not know about subcontrols - probably the former. For subcontrols not to know about Main Form would probably mean Main Form passing references to both Controls 1 and 2, which in turn would pass the references on to SubControls 1 and 2. Lots of lines of code which just forward the references. If I later added DataZ and DataW, and Controls 3 and 4 and SubControls 3 and 4, I'd have to add lots more reference-forwarding code. It seems simpler to me to give SubControls 1 and 2 member references to Main Form. That way, any sub control could just ask for MainForm.DataX or MainForm.DataY, and if I ever added new data, I could just access it directly from the sub controls with no hassle. But it still involves setting the 'MainForm' member references every time I add a new Control or SubControl. And it gives me a gut feeling of 'wrong'. As you might be able to tell, I'm not happy with either of my solutions. Is there a better way? Thanks

    Read the article

  • What is the best way to add two strings together?

    - by Pim Jager
    I read somewhere (I thought on Coding Horror) that it is bad practice to add strings together as if they are numbers, since, like numbers, strings cannot be changed; thus, adding them together creates a new string. So I was wondering, what is the best way to add two strings together when focusing on performance? Which of these four is better, or is there another way which is better?

        // Note that normally at least one of these two strings is variable
        $str1 = 'Hello ';
        $str2 = 'World!';
        $output1 = $str1.$str2; // This is said to be bad

        $str1 = 'Hello ';
        $output2 = $str1.'World!'; // Also bad

        $str1 = 'Hello';
        $str2 = 'World!';
        $output3 = sprintf('%s %s', $str1, $str2); // Good?
        // This last one is probably more common as:
        // $output = sprintf('%s %s', 'Hello', 'World!');

        $str1 = 'Hello ';
        $str2 = '{a}World!';
        $output4 = str_replace('{a}', $str1, $str2);

    Does it even matter?

    Read the article

  • What is the best way to embed SQL in VB.NET.

    - by Amy P
    I am looking for information on best practices or project layout for software that uses SQL embedded inside VB.NET or C#. The software will connect to a full SQL Server database. The software was written in VB6 and ported to VB.NET; we want to update it to use .NET functionality, but I am not sure where to start with my research. We are using Visual Studio 2005. All database manipulation is done from VB. Update: to clarify, we are currently using SqlConnection, SqlDataAdapter and SqlDataReader to connect to the database. What I mean by "embed" is that the SQL and stored procedure scripts are written inside our VB code and then run against the database. All of our tables, stored procs, views, etc. are manipulated in the VB code. The layout of our code is quite messy. I am looking for a better architecture or pattern that we can use to organize our code. Can you recommend any books, web pages, or topics that I can google to help me better understand the best way for us to do this?
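
    (Illustrative only: a small sketch of one common way to isolate the data access - a parameterized SqlCommand calling a stored procedure inside Using blocks - with an assumed connection string, procedure name and column.)

        Imports System.Data
        Imports System.Data.SqlClient

        Public Function GetCustomerName(ByVal customerId As Integer) As String
            Using conn As New SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True")
                Using cmd As New SqlCommand("usp_GetCustomerName", conn)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.AddWithValue("@CustomerId", customerId)
                    conn.Open()
                    Return CStr(cmd.ExecuteScalar())
                End Using
            End Using
        End Function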

    Read the article

  • (PHP) How do I get XMLReader to behave like SimpleXML with XPath? (Large directory-like XML file)

    - by AESM
    So, I've got this huge XML file (10 MB and up) that I want to parse, and I figured instead of using SimpleXML I'd better use XMLReader, since the performance should be way better, right? But XMLReader doesn't work with XPath... The XML is like this:

        <root name="bookmarks">
          <dir name="A directory">
            <link name="blablabla">
            <dir name="Sub directory">
              ...
            </dir>
          </dir>
          <link name="another link">
        </root>

    With SimpleXML combined with XPath, I would simply do:

        $xml = simplexml_load_file('/xmlFile.xml');
        $xml->xpath('/root[@name]/dir[@name="A directory"]/dir[@name="Sub directory"]');

    Which is so simple. But how do I do this using just XMLReader? PS: I'm converting the resulting node to DOM/SimpleXML to get its inner contents, like this:

        $node = $xr->expand();
        $dom = new DomDocument();
        $n = $dom->importNode($node, true);
        $dom->appendChild($n);
        $selectedRoot = simplexml_import_dom($dom);

    Is this ok? ... Thanks!
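
    (Illustrative only: a hybrid sketch - stream with XMLReader until the wanted top-level <dir> is reached, then expand just that subtree into SimpleXML and run the remaining XPath on it; the file name and element names follow the question.)

        $reader = new XMLReader();
        $reader->open('/xmlFile.xml');

        while ($reader->read()) {
            if ($reader->nodeType == XMLReader::ELEMENT
                    && $reader->name == 'dir'
                    && $reader->getAttribute('name') == 'A directory') {

                // Expand only this <dir> subtree; the rest of the large file is never built in memory.
                $dom  = new DOMDocument();
                $node = $dom->importNode($reader->expand(), true);
                $dom->appendChild($node);
                $dir = simplexml_import_dom($dom);

                $subdirs = $dir->xpath('dir[@name="Sub directory"]');
                break;
            }
        }
        $reader->close();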

    Read the article

  • Discover intended Foreign Keys from JOINS in scripts

    - by Jason
    I'm inheriting a database that has 400 tables and only 150 foreign key constraints registered. Knowing what I do about the application and looking at the table columns, it's easy to say that there ought to be a lot more. I'm afraid that the current application software will break if I start adding the missing FKs, because the developers have probably come to rely on this "freedom", but step one in fixing the problem is to come up with the list of missing FKs so we can evaluate them as a team. To make matters worse, the referencing columns don't share a naming convention. The relationships ARE coded informally into the hundreds of ad-hoc queries and stored procedures, so my hope is to parse these files programmatically, looking for JOINs between actual tables (but not table variables, etc.). Challenges I foresee in this approach are: newlines, optional aliases and table hints, and alias resolution.

    - Any better ideas? (Besides quitting.)
    - Are there any pre-built tools that can solve this?
    - I don't think regex can handle this. Do you disagree?
    - SQL parsers? I tried using Microsoft.SqlServer.Management.SqlParser.Parser, but all that is exposed is the lexer - I can't get an AST out of it; all that stuff is internal.

    Read the article
