Search Results

Search found 17194 results on 688 pages for 'document databases'.

Page 443 of 688

  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain. I successfully managed to merge on months in my previous datasets, but my final dataset only has quarter as its time count variable. So where all my normal databases have month 1-xxx as the indicator of time, this database has quarter as the indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database. Quick summary. QUARTER: Quarter 0 = JAN1996-MAR1996. Month: Month 0 = JAN1996. Example TVOL: TVOL _ Ticker ____ Quarter 1500 _ AA ________ -1 52546 _ BB ________ 15. Example WORK: BETA _ Ticker ____ Month 1.52 _ AA ________ 2 1.54 __ BB _______ 3. Example Merged: BETA ______ TVOL __ Ticker ____ Month 1.52 _______ 500 ___ AA _______ 2. I now want to merge these two tables using the following relationship: for a month in quarter i, the data of quarter i-1 has to be used; so if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind that observation. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
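
    A minimal SAS sketch of that merge, assuming the variable names shown above (month, quarter, ticker, tvol) and the convention month 0 = JAN1996, quarter 0 = JAN-MAR 1996, so a month m always pulls quarter floor(m/3) - 1 (e.g. FEB1996 = month 1 pulls quarter -1):

        /* derive the quarter whose TVOL each monthly observation should use */
        data work_q;
          set work;
          quarter = floor(month / 3) - 1;
        run;

        proc sql;
          create table merged as
          select w.*, t.tvol / 3 as tvol_monthly   /* quarterly value spread over 3 months */
          from work_q as w
          left join tvol as t
            on w.ticker = t.ticker
           and w.quarter = t.quarter;
        quit;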

    Read the article

  • Selecting unique records in XSLT/XPath

    - by Daniel I-S
    I have to select only unique records from an XML document, in the context of an <xsl:for-each> loop. I am limited by Visual Studio to using XSL 1.0. <availList> <item> <schDate>2010-06-24</schDate> <schFrmTime>10:00:00</schFrmTime> <schToTime>13:00:00</schToTime> <variousOtherElements></variousOtherElements> </item> <item> <schDate>2010-06-24</schDate> <schFrmTime>10:00:00</schFrmTime> <schToTime>13:00:00</schToTime> <variousOtherElements></variousOtherElements> </item> <item> <schDate>2010-06-25</schDate> <schFrmTime>10:00:00</schFrmTime> <schToTime>12:00:00</schToTime> <variousOtherElements></variousOtherElements> </item> <item> <schDate>2010-06-26</schDate> <schFrmTime>13:00:00</schFrmTime> <schToTime>14:00:00</schToTime> <variousOtherElements></variousOtherElements> </item> <item> <schDate>2010-06-26</schDate> <schFrmTime>10:00:00</schFrmTime> <schToTime>12:00:00</schToTime> <variousOtherElements></variousOtherElements> </item> </availList> The uniqueness must be based on the value of the three child elements: schDate, schFrmTime and schToTime. If two item elements have the same values for all three child elements, they are duplicates. In the above XML, items one and two are duplicates. The rest are unique. As indicated above, each item contains other elements that we do not wish to include in the comparison. 'Uniqueness' should be a factor of those three elements, and those alone. I have attempted to accomplish this through the following: availList/item[not(schDate = preceding:: schDate and schFrmTime = preceding:: schFrmTime and schToTime = preceding:: schToTime)] The idea behind this is to select records where there is no preceding element with the same schDate, schFrmTime and schToTime. However, its output is missing the last item. This is because my XPath is actually excluding items where all of the child element values are matched within the entire preceding document. No single item matches all of the last item's child elements - but because each element's value is individually present in another item, the last item gets excluded. I could get the correct result by comparing all child values as a concatenated string to the same concatenated values for each preceding item. Does anybody know of a way I could do this?
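
    A hedged XSLT 1.0 sketch of the usual technique for this (Muenchian grouping), keying on exactly the three child values concatenated with a separator; element names follow the availList sample above, and the for-each body is left as a placeholder:

        <xsl:key name="slot" match="availList/item"
                 use="concat(schDate, '|', schFrmTime, '|', schToTime)"/>

        <xsl:for-each select="availList/item[generate-id() =
                generate-id(key('slot',
                    concat(schDate, '|', schFrmTime, '|', schToTime))[1])]">
            <!-- only the first item of each unique date/time combination is selected -->
            ...
        </xsl:for-each>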

    Read the article

  • How do I execute a stored procedure with Vici CoolStorage?

    - by lincolnk
    I'm building an app around Vici CoolStorage (ASP.NET version). I have my classes created and mapped to my database tables and can pull a list of all records fine. I've written a stored procedure where the query jumps across databases that aren't mapped with CoolStorage; however, the fields in the query result map directly to one of my classes. The procedure takes one parameter. So, two questions here: How do I execute the stored procedure? I'm doing this: CSParameterCollection collection = new CSParameterCollection(); collection.Add("@id", id); var result = Vici.CoolStorage.CSDatabase.RunQuery("procedurename", collection); and getting the exception "Incorrect syntax near 'procedurename'." (I'm guessing this is because it's trying to execute it as text rather than a procedure?) And also, since the class representing my table is defined as abstract, how do I specify that the result should create a list of MyTable objects instead of generic or dynamic objects? If I try Vici.CoolStorage.CSDatabase.RunQuery<MyTable>(...) the compiler yells at me for it being an abstract class.
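
    A hedged sketch for the first half, using only the CoolStorage calls already shown in the question: if RunQuery really does pass its first argument through as raw SQL text, making that text a valid EXEC batch avoids the parse error (the abstract-class question for RunQuery<MyTable> is a separate issue):

        // assumption: RunQuery sends the string as plain SQL text, so make it an EXEC batch
        CSParameterCollection collection = new CSParameterCollection();
        collection.Add("@id", id);

        var result = Vici.CoolStorage.CSDatabase.RunQuery(
            "EXEC procedurename @id", collection);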

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several copies of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in connection-string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The workflows themselves will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture: (1) Can the workflow database be shared by all the applications? (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host? (3) How should the workflow engine communicate with the applications? Should each application call a WCF endpoint API exposed by the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How would the workflow engine then identify the applications? Both cases probably require some application identifier as a parameter in the API calls? (4) We would also like to store some information in the application databases based on the workflow states. Is that possible? Thanks for suggestions!

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
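
    For reference, a hedged SQL sketch of the relational alternative the question pushes back on: a child table with an explicit position column, so the list comes back already ordered and uniqueness is enforced by a constraint rather than by application code (table and column names are illustrative only):

        CREATE TABLE list_item (
            list_id   INT NOT NULL,
            position  INT NOT NULL,
            item_id   INT NOT NULL,
            PRIMARY KEY (list_id, position),
            UNIQUE (list_id, item_id)      -- items are unique within a list
        );

        -- the whole list in one query, already in order
        SELECT item_id
        FROM list_item
        WHERE list_id = 42
        ORDER BY position;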

    Read the article

  • MySQL InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have two databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The first database has a table 'pages' that stores various information about web pages, including the URL of each one. The column 'URL' is a varchar(512) and has no index. The second database has a table 'urlHops' defined as: CREATE TABLE urlHops ( dest varchar(512) NOT NULL, src varchar(512) DEFAULT NULL, timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, KEY dest_key (dest), KEY src_key (src) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Now, I basically need to issue (efficiently) queries like this: select p.id,p.URL from db1.pages p, db2.urlHops u where u.src=p.URL and u.dest=? At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on the same table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of are: adding a column to pages that stores the md5 hash of the URL and indexing it, so I could query on the md5 of the URL with the advantage of an index on a smaller column; or adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, with the only advantage of not slowing down the inserts and updates I execute on 'pages'. I don't want to slow down the inserts and updates, but at the same time I would like to be able to run the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards Davide
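
    A hedged sketch of the md5-column idea from the question; the column and index names are illustrative, CHAR(32) keeps the indexed value short and fixed-width compared with the varchar(512) URLs, and the application (or a trigger) would also need to set the hash on every INSERT/UPDATE:

        ALTER TABLE db1.pages   ADD COLUMN url_md5 CHAR(32) NOT NULL;
        ALTER TABLE db2.urlHops ADD COLUMN src_md5 CHAR(32) NOT NULL;

        UPDATE db1.pages   SET url_md5 = MD5(URL);
        UPDATE db2.urlHops SET src_md5 = MD5(src);

        CREATE INDEX idx_url_md5 ON db1.pages (url_md5);
        CREATE INDEX idx_src_md5 ON db2.urlHops (src_md5);

        -- the join then runs on the short indexed columns
        SELECT p.id, p.URL
        FROM db1.pages p
        JOIN db2.urlHops u ON u.src_md5 = p.url_md5
        WHERE u.dest = ?;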

    Read the article

  • Prevent lazy loading in nHibernate

    - by Ciaran
    Hi, I'm storing some blobs in my database, so I have a Document table and a DocumentContent table. Document contains a filename, description etc and has a DocumentContent property. I have a Silverlight client, so I don't want to load up and send the DocumentContent to the client unless I explicity ask for it, but I'm having trouble doing this. I've read the blog post by Davy Brion. I have tried placing lazy=false in my config and removing the virtual access modifier but have had no luck with it as yet. Every time I do a Session.Get(id), the DocumentContent is retrieved via an outer join. I only want this property to be populated when I explicity join onto this table and ask for it. Any help is appreciated. My NHibernate mapping is as follows: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Jrm.Model" namespace="Jrm.Model"> <class name="JrmDocument" lazy="false"> <id name="JrmDocumentID"> <generator class="native" /> </id> <property name="FileName"/> <property name="Description"/> <many-to-one name="DocumentContent" class="JrmDocumentContent" unique="true" column="JrmDocumentContentID" lazy="false"/> </class> <class name="JrmDocumentContent" lazy="false"> <id name="JrmDocumentContentID"> <generator class="native" /> </id> <property name="Content" type="BinaryBlob" lazy="false"> <column name="FileBytes" sql-type="varbinary(max)"/> </property> </class> </hibernate-mapping> and my classes are: [DataContract] public class JrmDocument : ModelBase { private int jrmDocumentID; private JrmDocumentContent documentContent; private long maxFileSize; private string fileName; private string description; public JrmDocument() { } public JrmDocument(string fileName, long maxFileSize) { DocumentContent = new JrmDocumentContent(File.ReadAllBytes(fileName)); FileName = new FileInfo(fileName).Name; } [DataMember] public virtual int JrmDocumentID { get { return jrmDocumentID; } set { jrmDocumentID = value; OnPropertyChanged("JrmDocumentID"); } } [DataMember] public JrmDocumentContent DocumentContent { get { return documentContent; } set { documentContent = value; OnPropertyChanged("DocumentContent"); } } [DataMember] public virtual long MaxFileSize { get { return maxFileSize; } set { maxFileSize = value; OnPropertyChanged("MaxFileSize"); } } [DataMember] public virtual string FileName { get { return fileName; } set { fileName = value; OnPropertyChanged("FileName"); } } [DataMember] public virtual string Description { get { return description; } set { description = value; OnPropertyChanged("Description"); } } } [DataContract] public class JrmDocumentContent : ModelBase { private int jrmDocumentContentID; private byte[] content; public JrmDocumentContent() { } public JrmDocumentContent(byte[] bytes) { Content = bytes; } [DataMember] public int JrmDocumentContentID { get { return jrmDocumentContentID; } set { jrmDocumentContentID = value; OnPropertyChanged("JrmDocumentContentID"); } } [DataMember] public byte[] Content { get { return content; } set { content = value; OnPropertyChanged("Content"); } } }
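
    A hedged sketch of the mapping change that usually produces the behaviour asked for here: it is the lazy="false" settings (on the classes and on the many-to-one) that force the eager outer join, so leaving the defaults - a lazy class and a lazy="proxy" association - and declaring the DocumentContent property virtual (it currently isn't) lets NHibernate defer loading the content until it is actually touched:

        <class name="JrmDocument">
          <id name="JrmDocumentID">
            <generator class="native" />
          </id>
          <property name="FileName"/>
          <property name="Description"/>
          <!-- lazy="proxy" (the default) instead of lazy="false" -->
          <many-to-one name="DocumentContent" class="JrmDocumentContent"
                       unique="true" column="JrmDocumentContentID" lazy="proxy"/>
        </class>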

    Read the article

  • PHP: Writing non-english characters to XML - encoding problem

    - by Dean
    Hello, I wrote a small PHP script to edit the site news XML file. I used DOM to manipulate the XML (Loading, writing, editing). It works fine when writing English characters, but when non-English characters are written, PHP throws an error when trying to load the file. If I manually type non-English characters into the file - it's loaded perfectly fine, but if PHP writes the non-English characters the encoding goes wrong, although I specified the utf-8 encoding. Any help is appreciated. Errors: Warning: DOMDocument::load() [domdocument.load]: Entity 'times' not defined in filepath Warning: DOMDocument::load() [domdocument.load]: Input is not proper UTF-8, indicate encoding ! Bytes: 0x91 0x26 0x74 0x69 in filepath Here are the functions responsible for loading and saving the file (self-explanatory): function get_tags_from_xml(){ // Load news entries from XML file for display $errors = Array(); if(!$xml_file = load_news_file()){ // Load file // String indicates error presence $errors = "file not found"; return $errors; } $taglist = $xml_file->getElementsByTagName("text"); return $taglist; } function set_news_lang(){ // Sets the news language global $news_lang; if($_POST["news-lang"]){ $news_lang = htmlentities($_POST["news-lang"]); } elseif($_GET["news-lang"]){ $news_lang = htmlentities($_GET["news-lang"]); } else{ $news_lang = "he"; } } function load_news_file(){ // Load XML news file for proccessing, depending on language global $news_lang; $doc = new DOMDocument('1.0','utf-8'); // Create new XML document $doc->load("news_{$news_lang}.xml"); // Load news file by language $doc->formatOutput = true; // Nicely format the file return $doc; } function save_news_file($doc){ // Save XML news file, depending on language global $news_lang; $doc->saveXML($doc->documentElement); $doc->save("news_{$news_lang}.xml"); } Here is the code for writing to XML (add news): <?php ob_start()?> <?php include("include/xml_functions.php")?> <?php include("../include/functions.php")?> <?php get_lang();?> <?php //TODO: ADD USER AUTHENTICATION! if(isset($_POST["news"]) && isset($_POST["news-lang"])){ set_news_lang(); $news = htmlentities($_POST["news"]); $xml_doc = load_news_file(); $news_list = $xml_doc->getElementsByTagName("text"); // Get all existing news from file $doc_root_element = $xml_doc->getElementsByTagName("news")->item(0); // Get the root element of the new XML document $new_news_entry = $xml_doc->createElement("text",$news); // Create the submited news entry $doc_root_element->appendChild($new_news_entry); // Append submited news entry $xml_doc->appendChild($doc_root_element); save_news_file($xml_doc); header("Location: /cpanel/index.php?lang={$lang}&news-lang={$news_lang}"); } else{ header("Location: /cpanel/index.php?lang={$lang}&news-lang={$news_lang}"); } ?> <?php ob_end_flush()?>
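
    A hedged sketch of the usual fix for both warnings: htmlentities() defaults to ISO-8859-1 and emits HTML-only entities such as &times;, which the XML parser rejects, whereas keeping the raw UTF-8 string and letting DOM escape the text itself keeps the file valid. Variable and function names follow the add-news script above:

        $news = $_POST["news"];  // keep the raw UTF-8 string, no htmlentities()

        $xml_doc = load_news_file();
        $doc_root_element = $xml_doc->getElementsByTagName("news")->item(0);

        // let DOM do the escaping instead of pre-encoding the string
        $new_news_entry = $xml_doc->createElement("text");
        $new_news_entry->appendChild($xml_doc->createTextNode($news));

        $doc_root_element->appendChild($new_news_entry);
        save_news_file($xml_doc);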

    Read the article

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here also involves changes to some database tables, for example adding new columns to several tables, adding new tables, etc. Now, the only method I can think of is to use a PHP or Python script (the two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have a default value of 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Is there any tool, or a better method, than writing a script yourself? I don't need to worry about multithreaded writing problems here; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
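
    A hedged sketch of the no-custom-script route, given that downtime is acceptable as described: dump the old database, load it into the new one, then apply the schema changes as ALTER statements, since a column added with a DEFAULT is back-filled for the old rows automatically (host and database names are placeholders):

        # on a machine that can reach both servers
        mysqldump -h old-host -u user -p old_db > old_db.sql
        mysql     -h new-host -u user -p new_db < old_db.sql

        -- then, on the new server, apply the schema changes, e.g.
        ALTER TABLE A ADD COLUMN extra_col INT NOT NULL DEFAULT 0;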

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update one server using the other. Here's what I'm looking for: (1) as automated as possible - ideally, without any involvement on my part once it's set up; (2) pushes a number of databases, in their entirety (including any schema changes), from one server to the other; (3) freely allows changes on the source server without breaking my process - for this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source and then recreate the publication and subscription. One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did. Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
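
    On the one open sub-question - whether the backup can be relied on to be finished before the transfer starts - a hedged T-SQL sketch: BACKUP DATABASE run from a script or Agent job step is synchronous, so anything after it in the same batch only runs once the .bak file is complete (paths and names are placeholders):

        BACKUP DATABASE [SourceDb]
        TO DISK = N'D:\Backups\SourceDb.bak'
        WITH INIT;
        -- control only reaches this point after the backup file is complete,
        -- so the compress/FTP step can safely start here

        -- on the destination server, after the file arrives:
        RESTORE DATABASE [SourceDb]
        FROM DISK = N'D:\Backups\SourceDb.bak'
        WITH REPLACE;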

    Read the article

  • Beyond core java

    - by Paul
    Coming to the end of the first year of my CS degree, we've done some Java but just the core stuff: manipulating strings and arrays, inheritance, implementing logic, etc. I visit this website daily and I see so much stuff that is beyond me: using frameworks, managing databases, etc. It makes me feel like I've just learned the syntax of Java, and there is so much more to do with it. My question is, though, how do I get there? I don't think I'm advanced enough to join an open source project, which seems to be suggested often (though I'd love to), and I've looked at other similar questions on here (like this one), but even then I don't think that'd work for me. Firstly, could somebody tell me about some commonly used frameworks and what they are used for? Where would be a good place to start? How did you get started using the things you do? Or perhaps you think I'm going down the wrong route. Should I learn another language, and just wait until the moment occurs where it's clear what I should be doing? I'm only in the first year of my degree; so far we've lightly covered Haskell and Java, and I've done a little HTML and CSS in my free time. I know that next year we cover Python, so perhaps I should just wait till then and see if I prefer that? I feel like I also risk learning something in depth and then never using it. I suppose I'm also asking for personal experiences: was there a point where you felt you'd exceeded the basic grasp of a language (it does not necessarily have to be Java related) and reached a more "advanced" level? I guess I'll put a subjective tag on this, but really I just want to know how to get beyond the basic understanding of a language.

    Read the article

  • Parsing xml file that comes in as one object per line

    - by Casey
    I haven't been here in so long, I forgot my prior account! Anyways, I am working on parsing an xml document that comes in ugly. It is for banking statements. Each line is a <statement>all tags</statement>. Now, what I need to do is read this file in, and parse the XML document at the same time, while formatting it more human readable too. Point beeing, Original input looks like this: <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement> <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement> <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement> I need the final output to be as follows: <statement> <name></name> <address></address> </statement> This is fine and dandy. I am using the following "very slow considering 5.1 million lines, 254k data file, and about 60k statements takes around 8 minutes". foreach(String item in lines) { XElement xElement = XElement.Parse(item); sr.WriteLine(xElement.ToString().Trim()); } Then when the file is formatted this is what sucks. I need to check every single tag in transaction elements, and if a tag is missing that could be there, I have to fill it in. Our designer software will default prior values in if a tag is possible, and the current objects does not have. It defaults in the value of a prior one that was not Null. "I know, and they swear up and down it is not a bug... ok?" So, that is also taking about 5 to 10 minutes. I need to break all this down, and find a faster method for working with the initial XML. This is a preprocess action, and cannot take that long if not necessary. It just seems redundant. Is there a better way to parse the XML, or is this the best I can do? I parse the XML, write to a temp file, and then read that file in, to the output file inserting the missing tags. 2 IO runs for one process. Yuck.
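
    A hedged C# sketch of folding the two passes into one - parse each line, add any missing tags, and write the readable output directly - so the temp file and the second read disappear; the required-tag list is a placeholder for whatever the statement layout actually mandates:

        using System.IO;
        using System.Xml.Linq;

        static class StatementConverter
        {
            // placeholder list; the real names come from the statement schema
            static readonly string[] RequiredHeaderTags =
                { "fiAddress", "accountNumber", "startDate", "endDate" };

            public static void Convert(string inPath, string outPath)
            {
                using (var reader = new StreamReader(inPath))
                using (var writer = new StreamWriter(outPath))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (line.Trim().Length == 0) continue;

                        var statement = XElement.Parse(line);
                        var header = statement.Element("accountHeader");

                        // add empty elements for anything the designer would otherwise default in
                        foreach (var tag in RequiredHeaderTags)
                            if (header != null && header.Element(tag) == null)
                                header.Add(new XElement(tag, string.Empty));

                        writer.WriteLine(statement.ToString());  // ToString() indents, so it stays human readable
                    }
                }
            }
        }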

    Read the article

  • What algorithm should I follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database whose tables consist of fields named "ttc", "rm", "atc" and "lta". These values are stored on a daily basis at 15-minute intervals, like: From_time to_time ttc rm atc lta 00:00 00:15 45 10 35 25, 00:15 00:30 35 10 25 25 and so on. These values are stored for every day of every month, and I want them presented in the prescribed format, so what algorithm should I follow? I am confused about how to do the comparisons for a format like the one at this link: https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing To be specific once again: I have to prepare a report from the retrieved data, which is stored in the database as explained above, and the report will cover an entire month. So, to say the least, there may be cases where for two particular days the value of "ttc" is the same for some time, and I want those listed together (as shown in the format). The confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If you still have questions about the question, you can ask. Thanks

    Read the article

  • swf listens to XML, AS3

    - by VideoDnd
    My swf listens to XML from a socket and document. How do I get 'my variables' to grab XML from the socket instead of the XML document? PURPOSE My purpose is to control variables with a XML Socket Server. I hope my question is clear, but ask if there's any questions. EXAMPLE Flash File import flash.net.*; import flash.display.*; import flash.events.*; import flash.system.Security; import flash.utils.Timer; import flash.events.TimerEvent; //MY VARIABLES, LINE 8-12 var timer:Timer = new Timer(10); var myString:String = ""; var count:int = 0; var myStg:String = ""; var fcount:int = 0; var xml_s=new XMLSocket(); xml_s.addEventListener(Event.CONNECT, socket_event_catcher);//OnConnect// xml_s.addEventListener(Event.CLOSE, socket_event_catcher);//OnDisconnect// xml_s.addEventListener(IOErrorEvent.IO_ERROR, socket_event_catcher);//Unable To Connect// xml_s.addEventListener(DataEvent.DATA, socket_event_catcher);//OnDisconnect// xml_s.connect("localhost", 1999); function socket_event_catcher(Event):void { switch (Event.type) { case 'ioError' : trace("ioError: " + Event.text);//Unable to Connect :(// break; case 'connect' : trace("Connection Established!");//Connected :)// break; case 'data' : trace("Received Data: " + Event.data); //var myXML:XML=new XML(Event.data); //trace("myXML.body.file: " + myXML.body.file); //var myLoader:String=myXML.body.file; //var urlReq:URLRequest=new URLRequest(myLoader); //trace("file to load URL: " + urlReq); break; case 'close' : trace("Connection Closed!");//OnDisconnect :( // xml_s.close(); break; } } //LOAD XML var myXML:XML; var myLoader:URLLoader = new URLLoader(); myLoader.load(new URLRequest("time.xml")); myLoader.addEventListener(Event.COMPLETE, processXML); //var myXML:XML=new XML(Event.data); //trace("myXML.body.file: " + myXML.body.file); //var DOINK:String=myXML.body.file; //var urlReq:URLRequest=new URLRequest(DOINK); //trace("file to load URL: " + urlReq); //PARSE XML function processXML(e:Event):void { myXML = new XML(e.target.data); trace(myXML.COUNT.text()); //-77777 //grab the data as a string myString = myXML.COUNT.text() //grab the data as an int count = int(myXML.COUNT.text()); //grab the data as a string myString = myXML.COUNT.text() //grab the data as an int count = int(myXML.COUNT.text()); //grab the data as a string myStg = myXML.COUNT.text() //grab the data as an int fcount = int(myXML.COUNT.text()); //grab the data as a string myStg = myXML.COUNT.text() //grab the data as an int fcount = int(myXML.COUNT.text()); trace("String: ", myString); trace("Int: ", count); trace(count - 1); //just to show you that it's a number that you can do math with (-77778) //TEXT var text:TextField = new TextField(); text.text = myString; addChild(text); } Ruby Server 'Snippet' msg1 = {"msg" => {"head" => {"type" => "frctl", "seq_no" => seq_no, "version" => 1.0}, "SESSION" => {"text" => "88888", "timer" => -1000, "count" => 10, "fcount" => "10"}}} XML <?xml version="1.0" encoding="utf-8"?> <SESSION> <TIMER TITLE="speed">100</TIMER> <COUNT TITLE="starting position">88888</COUNT> <FCOUNT TITLE="ramp">1000</FCOUNT> </SESSION> ENVIROMENT AS3 Ruby 186-25 PROBLEM Coding errors "coercion of value"
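
    A hedged ActionScript sketch of the missing piece: running the same parsing inside the socket's 'data' branch, so the socket feed (rather than time.xml) drives the variables. It assumes the server actually sends the <SESSION> XML shown at the bottom of the question, not the Ruby hash literal:

        case 'data' :
            var socketXML:XML = new XML(Event.data);
            myString = socketXML.COUNT.text();
            count    = int(socketXML.COUNT.text());
            myStg    = socketXML.COUNT.text();
            fcount   = int(socketXML.FCOUNT.text());
            trace("from socket - count: " + count + ", fcount: " + fcount);
            break;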

    Read the article

  • Add options to select box without Internet Explorer closing the box?

    - by Paul Colby
    Hi, I'm trying to build a web page with a number of drop-down select boxes that load their options asynchronously when the box is first opened. This works very well under Firefox, but not under Internet Explorer. Below is a small example of what I'm trying to achieve. Basically, there is a select box (with the id "selectBox"), which contains just one option ("Any"). Then there is an onmousedown handler that loads the other options when the box is clicked. <html> <head> <script type="text/javascript"> function appendOption(select,option) { try { selectBox.add(option,null); // Standards compliant. } catch (e) { selectBox.add(option); // IE only version. } } function loadOptions() { // Simulate an AJAX request that will call the // loadOptionsCallback function after 500ms. setTimeout(loadOptionsCallback,500); } function loadOptionsCallback() { var selectBox = document.getElementById('selectBox'); var option = document.createElement('option'); option.text = 'new option'; appendOption(selectBox,option); } </script> </head> <body> <select id="selectBox" onmousedown="loadOptions();"> <option>Any</option> </select> </body> </html> The desired behavior (which Firefox does) is: the user see's a closed select box containing "Any". the user clicks on the select box. the select box opens to reveal the one and only option ("Any"). 500ms later (or when the AJAX call has returned) the dropped-down list expands to include the new options (hard coded to 'new option' in this example). So that's exactly what Firefox does, which is great. However, in Internet Explorer, as soon as the new option is added in "4" the browser closes the select box. The select box does contain the correct options, but the box is closed, requiring the user to click to re-open it. So, does anyone have any suggestions for how I can load the select control's options asynchronously without IE closing the drop-down box? I know that I can load the list before the box is even clicked, but the real form I'm developing contains many such select boxes, which are all interrelated, so it will be much better for both the client and server if I can load each set of options only when needed. Also, if the results are loaded synchronously, before the select box's onmousedown handler completes, then IE will show the full list as expected - however, synchronous loading is a bad idea here, since it will completely "lock" the browser while the network requests are taking place. Finally, I've also tried using IE's click() method to open the select box once the new options have been added, but that does not re-open the select box. Any ideas or suggestions would be really appreciated!! :) Thanks! Paul.
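
    One hedged workaround sketch, reusing the ids and loadOptions() from the example above: kick off the asynchronous load when the pointer reaches the control (or it gains focus), so the options are normally in place before the list is opened and IE never has to repaint an already-open dropdown:

        var selectBox = document.getElementById('selectBox');
        var optionsLoaded = false;

        function loadOptionsOnce() {
            if (optionsLoaded) return;
            optionsLoaded = true;
            loadOptions();            // same simulated AJAX call as above
        }

        selectBox.onmouseover = loadOptionsOnce;  // mouse users: fires before any mousedown
        selectBox.onfocus = loadOptionsOnce;      // keyboard users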

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
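
    A hedged pointer on where the setting lives: the user-access option is recorded as part of the database's own metadata rather than anything instance-wide, which would explain it following the database across the mirroring failover; it can be checked per database in sys.databases:

        SELECT name, user_access_desc, state_desc
        FROM sys.databases
        WHERE name = N'YourMirroredDb';   -- placeholder database name

        -- user_access_desc shows SINGLE_USER, MULTI_USER or RESTRICTED_USER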

    Read the article

  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single file in there, and it should run, right? So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, and also the databases need to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache, so I guess that won't work straight out of the box and I will probably have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!
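
    Broadly yes, with a few Magento-specific follow-ups. A hedged sketch of the usual checklist after copying the files: Magento 1 keeps its DB credentials in app/etc/local.xml and its base URLs in the database, so both need updating (the URL below is a placeholder):

        -- after importing a mysqldump of the old database into the new server:
        UPDATE core_config_data
        SET value = 'http://my-new-host/'
        WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');

        -- then edit app/etc/local.xml for the new DB host/user/password
        -- and clear var/cache/ before the first request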

    Read the article

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access with both Access databases and SQL server in the past. The easy to use forms, report writers and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time I would like to convert our small business. Our server is already a Linux box using Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing which is holding me back is a desktop database application where I can develop the forms and reports we require. Base does not seem to be up to the job yet from my experience. I would like something like VB.Net with visual studio (express) but I would like to avoid Mono – I just don’t see the point of it. (You can correct me if I’m wrong.) But a good collection of forms, controls and a good report writer would be ideal. I have looked at web based stuff like Ruby on Rails, but I think a webserver for our 5 pc network is overkill. I don’t mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above but my mind is open. Any thoughts?

    Read the article

  • Help to restructure my Doc/View more correctly

    - by Harvey
    Edited by OP. My program is in need of a lot of cleanup and restructuring. In another post I asked about leaving the MFC DocView framework and going to the WinProc & Message Loop way (what is that called for short?). Well at present I am thinking that I should clean up what I have in Doc View and perhaps later convert to non-MFC it that even makes sense. My Document class currently has almost nothing useful in it. I think a place to start is the InitInstance() function (posted below). In this part: POSITION pos=pDocTemplate->GetFirstDocPosition(); CLCWDoc *pDoc=(CLCWDoc *)pDocTemplate->GetNextDoc(pos); ASSERT_VALID(pDoc); POSITION vpos=pDoc->GetFirstViewPosition(); CChildView *pCV=(CChildView *)pDoc->GetNextView(vpos); This seem strange to me. I only have one doc and one view. I feel like I am going about it backwards with GetNextDoc() and GetNextView(). To try to use a silly analogy; it's like I have a book in my hand but I have to look up in it's index to find out what page the Title of the book is on. I'm tired of feeling embarrassed about my code. I either need correction or reassurance, or both. :) Also, all the miscellaneous items are in no particular order. I would like to rearrange them into an order that may be more standard, structured or straightforward. ALL suggestions welcome! BOOL CLCWApp::InitInstance() { InitCommonControls(); if(!AfxOleInit()) return FALSE; // Initialize the Toolbar dll. (Toolbar code by Nikolay Denisov.) InitGuiLibDLL(); // NOTE: insert GuiLib.dll into the resource chain SetRegistryKey(_T("Real Name Removed")); // Register document templates CSingleDocTemplate* pDocTemplate; pDocTemplate = new CSingleDocTemplate( IDR_MAINFRAME, RUNTIME_CLASS(CLCWDoc), RUNTIME_CLASS(CMainFrame), RUNTIME_CLASS(CChildView)); AddDocTemplate(pDocTemplate); // Parse command line for standard shell commands, DDE, file open CCmdLineInfo cmdInfo; ParseCommandLine(cmdInfo); // Dispatch commands specified on the command line // The window frame appears on the screen in here. if (!ProcessShellCommand(cmdInfo)) { AfxMessageBox("Failure processing Command Line"); return FALSE; } POSITION pos=pDocTemplate->GetFirstDocPosition(); CLCWDoc *pDoc=(CLCWDoc *)pDocTemplate->GetNextDoc(pos); ASSERT_VALID(pDoc); POSITION vpos=pDoc->GetFirstViewPosition(); CChildView *pCV=(CChildView *)pDoc->GetNextView(vpos); if(!cmdInfo.m_Fn1.IsEmpty() && !cmdInfo.m_Fn2.IsEmpty()) { pCV->OpenF1(cmdInfo.m_Fn1); pCV->OpenF2(cmdInfo.m_Fn2); pCV->DoCompare(); // Sends a paint message when complete } // enable file manager drag/drop and DDE Execute open m_pMainWnd->DragAcceptFiles(TRUE); m_pMainWnd->ShowWindow(SW_SHOWNORMAL); m_pMainWnd->UpdateWindow(); // paints the window background pCV->bDoSize=true; //Prevent a dozen useless size calculations return TRUE; } Thanks
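
    On the narrow point about GetFirstDocPosition/GetNextDoc feeling backwards: in an SDI app the single view is reachable through the main frame once ProcessShellCommand has created it, so a hedged sketch (using the class names from the question, error checks kept to asserts) could replace that block with:

        CFrameWnd* pFrame = static_cast<CFrameWnd*>(m_pMainWnd);
        CChildView* pCV = static_cast<CChildView*>(pFrame->GetActiveView());
        ASSERT_VALID(pCV);
        CLCWDoc* pDoc = static_cast<CLCWDoc*>(pCV->GetDocument());
        ASSERT_VALID(pDoc);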

    Read the article

  • What ORM for .NET should I use?

    - by eKek0
    I'm relatively new to .NET and have been using Linq2Sql for almost a year, but it lacks some of the features I'm looking for now. I'm going to start a new project in which I want to use an ORM with the following characteristics: It has to be very productive - I don't want to be dealing with the access layer to save or retrieve objects from the database, but it should allow me to easily tweak any object before actually committing it to the database, and it should let me work easily with a changing database schema. It should allow me to extend the objects mapped from the database, for example to add virtual attributes to them (virtual columns on a table). It has to be (at least almost) database agnostic; it should allow me to work with different databases in a transparent way. It must not need much configuration, or it must be convention-based. It should allow me to work with Linq. So, do you know any ORM that I could use? Thank you for your help. EDIT: I know that one option is NHibernate. It appears to be the de facto standard for enterprise-level applications, but it also seems not very productive because of its steep learning curve. On the other hand, I have read in some other posts here on SO that it doesn't integrate well with Linq. Is all of that true?

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL Server databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add searching support for my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM type, with a single text field a) and entered some sample data. Now if I'm right the following query should return both those rows: SELECT * FROM test WHERE MATCH(a) AGAINST('databases') However it returns none. I've done a bit of research and I'm doing everything right as far as I can tell - the table is a MyISAM table, the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial? UPDATE: Ok, while Cody's solution worked in my test case it doesn't seem to work on my actual table: CREATE TABLE IF NOT EXISTS `uploads` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` text NOT NULL, `size` int(11) NOT NULL, `type` text NOT NULL, `alias` text NOT NULL, `md5sum` text NOT NULL, `uploaded` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ; And the data I'm using: INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'), (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'), (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'), (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'), (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46'); The query I'm now running is: SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE) Which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below: mysql> show variables like 'ft_%'; +--------------------------+----------------+ | Variable_name | Value | +--------------------------+----------------+ | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | +--------------------------+----------------+ 5 rows in set (0.02 sec) "header" is within the word length restrictions and I doubt it's a stop word (I'm not sure how to get the list). Any ideas?
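
    A hedged explanation plus sketch for the updated case: MySQL's default fulltext parser treats the underscore as a word character, so "header_bg2.png" is indexed as the single token "header_bg2" and a search for the bare word "header" cannot match it, while a boolean-mode prefix wildcard can. (The earlier natural-language test can also come back empty when the word appears in at least half of the rows, because of the built-in 50% threshold.)

        -- the uploads table shown has no FULLTEXT index on name; boolean mode
        -- will scan without one, but an index keeps it fast:
        ALTER TABLE uploads ADD FULLTEXT (name);

        SELECT * FROM uploads
        WHERE MATCH(name) AGAINST ('header*' IN BOOLEAN MODE);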

    Read the article

  • Change default markers for directions on google maps

    - by Elaine Marley
    I'm a complete noob with google maps api and I started with a given script that I'm editing to what I need to do. In this case I have a map with some points in it that come from a database. They are like this (after I get the lat/lng from the database): var route1 = 'from: 37.496764,-5.913379 to: 37.392587,-6.00023'; var route2 = 'from: 37.392587,-6.00023 to: 37.376964,-5.990838'; routes = [route1, route2]; Then my script does the following: for(var j = 0; j < routes.length; j++) { callGDirections(j); document.getElementById("dbg").innerHTML += "called "+j+"<br>"; } And then the directions: function callGDirections(num) { directionsArray[num] = new GDirections(); GEvent.addListener(directionsArray[num], "load", function() { document.getElementById("dbg").innerHTML += "loaded "+num+"<br>"; var polyline = directionsArray[num].getPolyline(); polyline.setStrokeStyle({color:colors[num],weight:3,opacity: 0.7}); map.addOverlay(polyline); bounds.extend(polyline.getBounds().getSouthWest()); bounds.extend(polyline.getBounds().getNorthEast()); map.setCenter(bounds.getCenter(),map.getBoundsZoomLevel(bounds)); }); // === catch Directions errors === GEvent.addListener(directionsArray[num], "error", function() { var code = directionsArray[num].getStatus().code; var reason="Code "+code; if (reasons[code]) { reason = reasons[code] } alert("Failed to obtain directions, "+reason); }); directionsArray[num].load(routes[num], {getPolyline:true}); } The thing is, I want to change the A and B markers that I get from google on the map to the ones for each of the points that I'm using (each has it's particular icon in the database) but I don't know how to do this. Furthermore, what would be fantastic but I'm clueless if it's even possible is the following: when I get the directions I get something like this: (a) Street A directions (b) Street B And I want (a) Name of first point directions (b) Name of second point (also from database) I understand that my knowledge of the subject is very lacking and the question might be a bit vague, but I would appreciate any tip pointing me in the right direction. EDIT: Ok, I learned a lot from the google api with this problem but I'm still far from what I need. I learned how to hide the default markers doing this: // Hide the route markers when signaled. GEvent.addListener(directionsArray[num], "addoverlay", hideDirMarkers); // Not using the directions markers so hide them. function hideDirMarkers(){ var numMarkers = directionsArray[num].getNumGeocodes() for (var i = 0; i < numMarkers; i++) { var marker = directionsArray[num].getMarker(i); if (marker != null) marker.hide(); else alert("Marker is null"); } } And now when I create new markers doing this: var point = new GLatLng(lat,lng); var marker = createMarker(point,html); map.addOverlay(marker); They appear but they are not clickable (the popup with the html won't show)
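
    A hedged Maps API v2 sketch of the custom-marker part: once the default A/B markers are hidden (as worked out in the question), markers with per-point icons and titles can be placed on the first and last vertices of each route's polyline. The icon and name arrays are placeholders for the values coming from the database:

        GEvent.addListener(directionsArray[num], "load", function() {
            var polyline = directionsArray[num].getPolyline();
            var start = polyline.getVertex(0);
            var end   = polyline.getVertex(polyline.getVertexCount() - 1);

            var icon = new GIcon(G_DEFAULT_ICON);
            icon.image = pointIcons[num];              // placeholder: icon URL per point

            map.addOverlay(new GMarker(start, {icon: icon, title: pointNames[num]}));
            map.addOverlay(new GMarker(end,   {icon: icon, title: pointNames[num + 1]}));
        });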

    Read the article

  • Using the .NET WebBrowser control from multiple threads throws "EXCEPTION code=ACCESS_VIOLATION"

    - by user1507827
    i want to make a console program to monitor a webpage's htmlsourcecode, because some of the page content are created by some javescript, so i have to use webbrowser control. like : View Generated Source (After AJAX/JavaScript) in C# my code is below: public class WebProcessor { public string GeneratedSource; public string URL ; public DateTime beginTime; public DateTime endTime; public object GetGeneratedHTML(object url) { URL = url.ToString(); try { Thread[] t = new Thread[10]; for (int i = 0; i < 10; i++) { t[i] = new Thread(new ThreadStart(WebBrowserThread)); t[i].SetApartmentState(ApartmentState.STA); t[i].Name = "Thread" + i.ToString(); t[i].Start(); //t[i].Join(); } } catch (Exception ex) { Console.WriteLine(ex.ToString()); } return GeneratedSource; } private void WebBrowserThread() { WebBrowser wb = new WebBrowser(); wb.ScriptErrorsSuppressed = true; wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler( wb_DocumentCompleted); while(true ) { beginTime = DateTime.Now; wb.Navigate(URL); while (wb.ReadyState != WebBrowserReadyState.Complete) { Thread.Sleep(new Random().Next(10,100)); Application.DoEvents(); } } //wb.Dispose(); } private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { WebBrowser wb = (WebBrowser)sender; if (wb.ReadyState == WebBrowserReadyState.Complete) { GeneratedSource= wb.Document.Body.InnerHtml; endTime = DateTime.Now; Console.WriteLine("WebBrowser " + (endTime-beginTime).Milliseconds + Thread.CurrentThread.Name + wb.Document.Title); } } } when it run, after a while (20-50 times), it throw the exception like this EXCEPTION code=ACCESS_VIOLATION (null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(nul l)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(n ull)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null) (null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(null)(nul l)(null)BACKTRACE: 33 stack frames: #0 0x083dba8db0 at MatchExactGetIDsOfNames in mshtml.dll #1 0x0879f9b837 at StrongNameErrorInfo in mscorwks.dll #2 0x0879f9b8e3 at StrongNameErrorInfo in mscorwks.dll #3 0x0879f9b93a at StrongNameErrorInfo in mscorwks.dll #4 0x0879f9b9e0 at StrongNameErrorInfo in mscorwks.dll #5 0x0879f9b677 at StrongNameErrorInfo in mscorwks.dll #6 0x0879f9b785 at StrongNameErrorInfo in mscorwks.dll #7 0x0879f192a8 at InstallCustomModule in mscorwks.dll #8 0x0879f19444 at InstallCustomModule in mscorwks.dll #9 0x0879f194ab at InstallCustomModule in mscorwks.dll #10 0x0879fa6491 at StrongNameErrorInfo in mscorwks.dll #11 0x0879f44bcf at DllGetClassObjectInternal in mscorwks.dll #12 0x089bbafa at in #13 0x087b18cc10 at in System.Windows.Forms.ni.dll #14 0x087b91f4c1 at in System.Windows.Forms.ni.dll #15 0x08d00669 at in #16 0x08792d6e46 at in mscorlib.ni.dll #17 0x08792e02cf at in mscorlib.ni.dll #18 0x08792d6dc4 at in mscorlib.ni.dll #19 0x0879e71b4c at in mscorwks.dll #20 0x0879e896ce at in mscorwks.dll #21 0x0879e96ea9 at CoUninitializeEE in mscorwks.dll #22 0x0879e96edc at CoUninitializeEE in mscorwks.dll #23 0x0879e96efa at CoUninitializeEE in mscorwks.dll #24 0x0879f88357 at GetPrivateContextsPerfCounters in mscorwks.dll #25 0x0879e9cc8f at CoUninitializeEE in mscorwks.dll #26 0x0879e9cc2b at CoUninitializeEE in mscorwks.dll #27 0x0879e9cb51 at CoUninitializeEE in mscorwks.dll #28 0x0879e9ccdd at CoUninitializeEE in mscorwks.dll #29 0x0879f88128 at 
GetPrivateContextsPerfCounters in mscorwks.dll #30 0x0879f88202 at GetPrivateContextsPerfCounters in mscorwks.dll #31 0x0879f0e255 at InstallCustomModule in mscorwks.dll #32 0x087c80b729 at GetModuleFileNameA in KERNEL32.dll I have tried lots of methods to solve the problem; finally, I found that if the threads sleep for more milliseconds it will run for a longer time, but the exception is still thrown. I hope somebody can give me the answer for how to solve this... thanks very much!!!
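
    A hedged C# sketch of the usual way to host WebBrowser off the UI thread: the control is an ActiveX/COM component, so each instance needs its own STA thread running a real message pump (Application.Run) rather than a Sleep/DoEvents loop, and results should not be shared through fields written by ten threads at once. One navigation per pump, result handed back through a local variable:

        // using System.Threading; using System.Windows.Forms;
        private static string FetchGeneratedHtml(string url)
        {
            string html = null;
            var t = new Thread(() =>
            {
                var wb = new WebBrowser { ScriptErrorsSuppressed = true };
                wb.DocumentCompleted += (s, e) =>
                {
                    html = wb.Document.Body.InnerHtml;
                    Application.ExitThread();      // stop this thread's message pump
                };
                wb.Navigate(url);
                Application.Run();                 // pump messages until ExitThread
            });
            t.SetApartmentState(ApartmentState.STA);
            t.Start();
            t.Join();
            return html;
        }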

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script: DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX) DECLARE @table_cursor CURSOR SET @table_cursor = CURSOR FAST_FORWARD FOR SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES OPEN @table_cursor FETCH NEXT FROM @table_cursor INTO @table_name WHILE @@FETCH_STATUS = 0 BEGIN SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name print @query exec @query FETCH NEXT FROM @table_cursor INTO @table_name END CLOSE @table_cursor DEALLOCATE @table_cursor The problem is that when I run the script, the "print @query" statement prints a statement like this: INSERT INTO table SELECT * FROM MyDataBase.dbo.table When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec I get this error: Msg 911, Level 16, State 1, Line 21 Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'. No entry found with that name. Make sure that the name is entered correctly. I hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
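
    For reference, the behaviour is the difference between EXEC @variable and EXEC (@variable): without parentheses, SQL Server treats the variable's value as the name of a stored procedure to run, which is why it complains about the whole INSERT string. A sketch of the two standard fixes:

        -- run the variable's contents as a dynamic SQL batch
        EXEC (@query);

        -- or, preferably, with sp_executesql
        EXEC sp_executesql @query;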

    Read the article
