Search Results

Search found 17194 results on 688 pages for 'document databases'.


  • EXEC problem in SQL 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its tables, and I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR
        SET @table_cursor = CURSOR FAST_FORWARD FOR
            SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query
            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints statements like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec, I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
        No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
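
    For readers hitting the same error: EXEC followed by a bare variable treats the variable's value as the name of a stored procedure to execute, which is why SQL Server tries to look the whole INSERT statement up as a name. A minimal sketch of the two usual fixes (the QUOTENAME guard is an addition, not part of the original script):

        -- Parenthesized EXEC runs the string as an ad-hoc batch
        -- instead of resolving it as a procedure name.
        EXEC (@query)

        -- sp_executesql does the same and also supports parameters;
        -- QUOTENAME protects against unusual table names.
        SET @query = N'INSERT INTO ' + QUOTENAME(@table_name)
                   + N' SELECT * FROM MyDataBase.dbo.' + QUOTENAME(@table_name)
        EXEC sp_executesql @query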


  • Javascript/ajax/php question: sending from server to client works, sending from client to server fails

    - by Jeroen Willemsen
    Hey all, sorry for reposting (admins, please delete the other one!). Since you guys have been a great help, I was kind of hoping you could help me once again with the following question: I am currently trying to work with AJAX by letting a manager class in PHP communicate with the JavaScript on the client side via an XmlHttp object. However, while I can send something to the client via JSON, I cannot read it on the client side. In fact, I am getting the error that "time" is an undefined index in Session. So I was wondering: what am I doing wrong?

    The JavaScript code for the Ajax:

        <script type="text/javascript">
        var sendReq = GetXmlHttpObject();
        var receiveReq = GetXmlHttpObject();
        var JSONIn = 0;
        var JSONOut = 0;
        //var mTimer;

        // function to retrieve the XmlHttp object for AJAX calls (correct)
        function GetXmlHttpObject() {
            var xmlHttp = null;
            try {
                // Firefox, Opera 8.0+, Safari
                xmlHttp = new XMLHttpRequest();
            } catch (e) {
                // Internet Explorer
                try {
                    xmlHttp = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
                }
            }
            return xmlHttp;
        }

        // Gets the new info from the server
        function getUpdate() {
            if (receiveReq.readyState == 4 || receiveReq.readyState == 0) {
                receiveReq.open("GET", "index.php?json=" + JSONIn + "&sid=$this->session", true);
                receiveReq.onreadystatechange = updateState;
                receiveReq.send(null);
            }
        }

        // Send a message to the server.
        function sendUpdate(JSONstringsend) {
            JSONOut = JSONstringsend;
            if (sendReq.readyState == 4 || sendReq.readyState == 0) {
                sendReq.open("POST", "index.php?json=" + JSONstringsend + "&sid=$this->session", true);
                sendReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
                alert(JSONstringsend);
                sendReq.onreadystatechange = updateCycle;
                sendReq.send(JSONstringsend);
            }
        }

        // When data has been sent, update the page.
        function updateCycle() {
            getUpdate();
        }

        function updateState() {
            if (receiveReq.readyState == 4) {
                // JSON answer gets here (correct):
                var JSONtext = sendReq.responseText;
                alert(JSONtext);
                // convert received string to JavaScript object (correct)
                var JSONobject = JSON.parse(JSONtext);
                // updates date from the JSON answer (correct):
                document.getElementById("dateview").innerHTML = JSONobject.date;
            }
            //mTimer = setTimeout('getUpdate();', 2000); // Refresh our chat in 2 seconds
        }
        </script>

    The function that actually uses the Ajax code:

        // datepicker data
        $(document).ready(function() {
            $("#datepicker").datepicker({
                onSelect: function(dateText) {
                    var JSONObject = {"date": dateText};
                    var JSONstring = JSON.stringify(JSONObject);
                    sendUpdate(JSONstring);
                },
                dateFormat: 'dd-mm-yy'
            });
        });
        </script>

    And the PHP code:

        private function handleReceivedJSon($json) {
            $this->jsonLocal = array();
            $json = $_POST["json"];
            $this->jsonDecoded = json_decode($json, true);
            if (isset($this->jsonDecoded["date"])) {
                $_SESSION["date"] = $this->jsonDecoded["date"];
                $this->useddate = $this->jsonDecoded;
            }
            if (isset($this->jsonDecoded["logout"])) {
                session_destroy();
                exit("logout");
            }
            header("Last-Modified: " . gmdate("D, d M Y H:i:s") . "GMT");
            header("Cache-Control: no-cache, must-revalidate");
            header("Pragma: no-cache");
            header("Content-Type: text/xml; charset=utf-8");
            exit($json);
        }


  • Mono-LibreOffice System.TypeLoadException

    - by Marco
    In the past I wrote a C# library to work with OpenOffice, and it worked fine both on Windows and under Ubuntu with Mono. Part of this library is published here as an accepted answer. Recently I discovered that Ubuntu has decided to move to LibreOffice, so I tried my library with the latest stable LibreOffice release. While under Windows it works perfectly, under Linux I receive this error:

        Unhandled Exception: System.TypeLoadException: A type load exception has occurred.
        [ERROR] FATAL UNHANDLED EXCEPTION: System.TypeLoadException: A type load exception has occurred.

    Usually Mono tells us which library it can't load, so I can install the correct package and everything is OK, but in this case I really don't know what's going wrong. I'm using Ubuntu Oneiric and my library is compiled against Framework 4.0. Under Windows I had to write this into app.config:

        <?xml version="1.0"?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0,Profile=Client"/>
          </startup>
        </configuration>

    because the LibreOffice assemblies use Framework 2.0 (I think). How can I find the reason for this error and solve it? Thanks.

    UPDATE: Even compiling with Framework 2.0, the problem (as expected) is the same. The problem (I think) is that Mono is not finding the cli-uno-bridge package (installable on previous Ubuntu releases and now marked as superseded), but I cannot be sure.

    UPDATE 2: I created a test console application referencing the cli-uno DLLs on Windows (they are registered in GAC_32 and GAC_MSIL).

    Console app:

        static void Main(string[] args)
        {
            Console.WriteLine("Starting");
            string dir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
            string doc = Path.Combine(dir, "Liberatoria siti web.docx");
            using (QOpenOffice.OpenOffice oo = new QOpenOffice.OpenOffice())
            {
                if (!oo.Init()) return;
                oo.Load(doc, true);
                oo.ExportToPdf(Path.ChangeExtension(doc, ".pdf"));
            }
        }

    Library:

        using unoidl.com.sun.star.lang;
        using unoidl.com.sun.star.uno;
        using unoidl.com.sun.star.container;
        using unoidl.com.sun.star.frame;
        using unoidl.com.sun.star.beans;
        using unoidl.com.sun.star.view;
        using unoidl.com.sun.star.document;
        using System.Collections.Generic;
        using System.IO;
        using System;

        namespace QOpenOffice
        {
            class OpenOffice : IDisposable
            {
                private XComponentContext context;
                private XMultiServiceFactory service;
                private XComponentLoader component;
                private XComponent doc;

                public bool Init()
                {
                    Console.WriteLine("Entering Init()");
                    try
                    {
                        context = uno.util.Bootstrap.bootstrap();
                        service = (XMultiServiceFactory)context.getServiceManager();
                        component = (XComponentLoader)service.createInstance("com.sun.star.frame.Desktop");
                        XNameContainer filters = (XNameContainer)service.createInstance("com.sun.star.document.FilterFactory");
                        return true;
                    }
                    catch (System.Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                        if (ex.InnerException != null)
                            Console.WriteLine(ex.InnerException.Message);
                        return false;
                    }
                }
            }
        }

    But I'm not able to see "Starting"!!! If I comment out the using(...) block in the application, I see the line on the console... so I think something is wrong in the DLL: there I'm not able to see the "Entering Init()" message in Init(). The behaviour is the same whether or not LibreOffice is installed!!! The try..catch block is never executed...


  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles, each sending ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (every piece of data we can read from the vehicle computer and dashboard). Data is sent by GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k.

    I see 3 main blocks:

    1. Multithreaded Socket Server (MSS) - I have it. MSS stores received data in a queue (using NServiceBus).
    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). Rule example: among the received bytes there will be information about current speed. When the speed is above 120, then: show an alert in the web application for specified users, send an e-mail, send an SMS text. (There can be more than one instance of RPS.)
    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.

    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) on the RPS? And how do I send changed rules to the RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a separate instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.


  • MySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision, and I'm not sure how to continue. Here's a very oversimplified version of what I'm building: people can rate photos 1-5, and I need to display the average votes on the picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc. The normalization freak in me initially designed the table structure like this:

        Table pictures
        id* | picture | userID

        Table ratings
        id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and am done with it. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics too; I can easily find who rated a certain picture a 3, and so on. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version would seem to be more efficient, e.g.:

        Table picture
        id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research shows that I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
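
    A common middle ground (a sketch, not the only answer): keep the normalized ratings table as the source of truth and maintain a running aggregate on the parent row, so reads stay one-row cheap while the detail rows remain available for statistics. The aggregate column names here are illustrative additions:

        -- Denormalized aggregate maintained at write time.
        ALTER TABLE pictures
            ADD rating_sum   INT NOT NULL DEFAULT 0,
            ADD rating_count INT NOT NULL DEFAULT 0;

        -- On each new vote, insert the detail row and bump the aggregate
        -- in the same transaction.
        INSERT INTO ratings (pictureID, userID, rating) VALUES (5, 42, 4);
        UPDATE pictures
           SET rating_sum = rating_sum + 4,
               rating_count = rating_count + 1
         WHERE id = 5;

        -- The average is now a single-row read (NULL until the first vote):
        SELECT rating_sum / rating_count AS avg_rating FROM pictures WHERE id = 5;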


  • Ruby script/console and Ruby script/server using two different DBs?

    - by aronchick
    Has anyone seen script/console and script/server load two different databases (though both report using the same one)? Here's the first output:

        $ script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        [2010-03-21 15:54:05] INFO  WEBrick 1.3.1
        [2010-03-21 15:54:05] INFO  ruby 1.8.7 (2010-01-10) [i386-mingw32]
        [2010-03-21 15:54:05] INFO  WEBrick::HTTPServer#start: pid=7148 port=3000

    No errors. I then run my standard code for entering a form - no problems. Checking the dev database (.yml at bottom):

        mysql> select * from books;
        [...]
        | 712 | Book | Book Name | 2010-03-21 22:29:22 | 2010-03-21 22:29:22 |
        [...]
        712 rows in set (0.00 sec)

    The code CLEARLY saved it seconds ago. And now here's the output of script/console:

        $ script/console
        Loading development environment (Rails 2.3.5)
        >> Books.all
        => []

    Nothing. Upon further inspection, the console is using the production database, but I can't figure out why. Any thoughts here? All consoles have been closed and reopened.

    UPDATE: Requested .yml file (can't see how it'd be helpful; user name and password are the same for each):

        development:
          adapter: mysql
          database: BooksDBdev
          username: <user name>
          password: <long string>
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: mysql
          database: BooksDBtest
          username: <user name>
          password: <long string>
          timeout: 5000

        production:
          adapter: mysql
          database: BooksDB
          username: <user name>
          password: <long string>
          timeout: 5000


  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just decided that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone into SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to save the change, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
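
    For what it's worth, the table rebuild can be avoided entirely by making the change in T-SQL rather than the designer: backfill the NULLs, then flip each column with ALTER COLUMN. The loop below is a sketch that generates those statements from the catalog, so it scales to N columns and can be re-run on each identical database; it assumes every nullable bit column in the table should become NOT NULL with default 0:

        -- Sketch: UPDATE + ALTER COLUMN + default for every nullable bit column.
        DECLARE @col sysname, @sql nvarchar(MAX);
        DECLARE c CURSOR FAST_FORWARD FOR
            SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'MyTable' AND DATA_TYPE = 'bit' AND IS_NULLABLE = 'YES';
        OPEN c;
        FETCH NEXT FROM c INTO @col;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'UPDATE MyTable SET ' + QUOTENAME(@col) + N' = 0 WHERE ' + QUOTENAME(@col) + N' IS NULL;'
                     + N' ALTER TABLE MyTable ALTER COLUMN ' + QUOTENAME(@col) + N' bit NOT NULL;'
                     + N' ALTER TABLE MyTable ADD CONSTRAINT ' + QUOTENAME('DF_' + @col) + N' DEFAULT 0 FOR ' + QUOTENAME(@col) + N';';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM c INTO @col;
        END
        CLOSE c;
        DEALLOCATE c;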


  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html:

        Common Language Runtime (CLR)
        Database file placement
        Database mirroring
        Distributed queries
        Distributed transactions
        Filegroup management
        Global temporary tables
        Spatial data and indexes
        SQL Server configuration options
        SQL Server Service Broker
        System tables
        Trace Flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

        XML types (the MSDN page does mention large custom types - I guess it may include this, even if the data schema is really small?)
        Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.


  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. Behavior will be as follows:

    The result will be sorted by sequence id; a new object will be created when the sequence id changes. The created object will be sent to an external service and will sometimes have to wait before sending another one (which means the next set of data will not be used immediately).

    We already have invested code in iBatis 3, so an iBatis solution would be the best approach for us (we've tried using RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work on different databases.

    UPDATE: We need to make as few calls to the DB as possible (1 call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions out there for this type of problem, be it pure JDBC or any other technology? Thanks, and I hope to hear your insights on this.


  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code testing code (which uses a "unittest" class). Later I found it would be nice to also test database integrity, and I put this into a separate directory tree. (Things like: the keys have the correct format, parent and child nodes point to each other correctly, and such.) I use the same unittest class for the integrity tests.

    Now I wonder if it really makes sense to keep these separate. To test the integrity of the data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, so they send an alarm if something happens in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by cron on the production environment.


  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing, and I'd like some more opinions.

    The situation: the program should update at near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input.

    My solution was: build a server that runs continuously and stores the data internally. The server dumps a state-of-the-program at regular intervals to protect against power failures and/or crashes.

    He argued that the program requires a database, and that I should use cron jobs to update the data. I can store movement information by storing start point, end point and speed, and then update the position in the cron job (and calculate collisions with other objects there) from the direction and speed.

    His reasons: my solution requires more CPU & memory because it runs constantly; power failures/crashes might destroy data; databases are faster.

    My reasons against this are mostly: it's not very precise, as events can only occur at full minutes (that wouldn't be so bad, though); it requires a (possibly costly) transformation of the data on every run, from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; and power failures (or other crashes) can leave the data in an undefined state with only partially updated data, unless (possibly costly) precautions (like transactions) are taken.

    What are your opinions about that? Which arguments can you add for either side?


  • How do I execute a stored procedure with Vici CoolStorage?

    - by lincolnk
    I'm building an app around Vici CoolStorage (ASP.NET version). I have my classes created and mapped to my database tables and can pull a list of all records fine. I've written a stored procedure whose query jumps across databases that aren't mapped with CoolStorage; however, the fields in the query result map directly to one of my classes. The procedure takes 1 parameter. So, 2 questions here:

    How do I execute the stored procedure? I'm doing this:

        CSParameterCollection collection = new CSParameterCollection();
        collection.Add("@id", id);
        var result = Vici.CoolStorage.CSDatabase.RunQuery("procedurename", collection);

    and getting the exception "Incorrect syntax near 'procedurename'." (I'm guessing this is because it's trying to execute it as text rather than as a procedure?)

    Also, since the class representing my table is defined as abstract, how do I specify that the result should create a list of MyTable objects instead of generic or dynamic or whatever objects? If I try

        Vici.CoolStorage.CSDatabase.RunQuery<MyTable>(...)

    the compiler yells at me for it being an abstract class.
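
    On the first question: the "Incorrect syntax near 'procedurename'" error suggests RunQuery sends its argument to the server as a raw SQL batch, and a bare procedure name is not a valid batch. If that's the case (an assumption about CoolStorage's behaviour, not something confirmed here), wrapping the name in an EXEC statement should at least get past the parser:

        -- A bare name is not a valid batch, but an EXEC statement is.
        -- Whether CoolStorage binds @id from the parameter collection
        -- into this text is assumed, not verified.
        EXEC procedurename @id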


  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea.

    I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted.

    If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects circumvent the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated.

    So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?


  • Multiple databases with NHibernate

    - by Flint
    Hi, I have two databases: one Oracle 10g, the other MySQL. I have configured my web application with NHibernate for Oracle, and now I need to use the MySQL database as well. How can I configure hibernate.cfg.xml so that I can use both databases in the same application? My current hibernate.cfg.xml is:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
            <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property>
            <property name="connection.connection_string">Data Source=xe;Persist Security Info=True;User ID=hr;Password=hr;Unicode=True</property>
            <property name="show_sql">false</property>
            <property name="dialect">NHibernate.Dialect.Oracle9Dialect</property>
            <!-- mapping files -->
            <mapping assembly="DataTransfer" />
          </session-factory>
        </hibernate-configuration>


  • Issue with XSL Criteria

    - by Rachel
    I am using the piece of XSL below to select the ids of the text nodes whose content contains a given index. This index value in the input is relative to a specified node whose id value is known. The criterion to select the text node is: the text node content should contain an index, say 'i', relative to a node, say 'n', whose id value I know. 'i' and the id of 'n' are given as index and nodeName in the input param, as seen in the XSL. Node 'd1e5' has the text content whose index ranges from 1 to 33. When I give an index value greater than 33, I want the criterion below to fail, but it does not:

        [sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index]

    Input XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <html xmlns="http://www.w3.org/1999/xhtml" id="d1e1">
          <head id="d1e3">
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <title id="d1e5">Every document must have a title</title>
          </head>
          <body id="d1e9">
            <h1 id="d1e11" align="center">Very Important Heading</h1>
            <p id="d1e13">Since this is just a sample, I won't put much text here.</p>
          </body>
        </html>

    XSL code used:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                        exclude-result-prefixes="xsd" version="2.0">
          <xsl:param name="insert-file" as="node()+">
            <insert-data>
              <data index="1" nodeName="d1e5"></data>
              <data index="34" nodeName="d1e5"></data>
            </insert-data>
          </xsl:param>
          <xsl:param name="nodeName" as="xsd:string" />
          <xsl:variable name="main-root" as="document-node()" select="/"/>
          <xsl:variable name="insert-data" as="element(data)*">
            <xsl:for-each select="$insert-file/insert-data/data">
              <xsl:sort select="xsd:integer(@index)"/>
              <xsl:variable name="index" select="xsd:integer(@index)" />
              <xsl:variable name="nodeName" select="@nodeName" />
              <data text-id="{generate-id($main-root/descendant::text()[sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index][1])}">
              </data>
            </xsl:for-each>
          </xsl:variable>
          <xsl:template match="/">
            <Output>
              <xsl:copy-of select="$insert-data" />
            </Output>
          </xsl:template>
        </xsl:stylesheet>

    Actual output:

        <?xml version="1.0" encoding="UTF-8"?>
        <Output>
          <data text-id="d1t8"/>
          <data text-id="d1t14"/>
        </Output>

    Expected output:

        <?xml version="1.0" encoding="UTF-8"?>
        <Output>
          <data text-id="d1t8"/>
        </Output>

    This solution works fine if the index lies between 1 and 33. Any index value greater than 33 causes incorrect text nodes to be selected. I could not understand why the text node 'd1t14' is getting selected. Please share your thoughts.


  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly, or through a cascade delete caused by deleting in a third table.

    Googling this indicates that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do a lot of restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how restructuring could help, as I would just move the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server!

    Any comments welcome and much appreciated! Anders, Denmark
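
    Background that may help: SQL Server refuses foreign keys whose ON DELETE CASCADE actions can reach the same table along more than one path ("may cause cycles or multiple cascade paths"), which is why this logically sound construct fails. The usual workaround is plain, non-cascading foreign keys plus a delete trigger. A sketch with assumed column names (Id, ResultId1, ResultId2); it also assumes Results itself is not the target of a cascading delete from the third table, since SQL Server disallows an INSTEAD OF DELETE trigger on a table that carries a cascading-delete foreign key, so that relationship would need the same trigger treatment:

        -- Sketch: clean up both referencing columns in one trigger instead of
        -- two cascading foreign keys (which SQL Server rejects).
        CREATE TRIGGER trg_Results_Delete ON Results
        INSTEAD OF DELETE
        AS
        BEGIN
            DELETE cr
            FROM ComparedResults cr
            JOIN deleted d ON cr.ResultId1 = d.Id OR cr.ResultId2 = d.Id;

            DELETE r
            FROM Results r
            JOIN deleted d ON r.Id = d.Id;
        END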


  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, but it turns out I have a final dataset which only has quarter as a time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database.

    Quick summary:

        QUARTER: Quarter 0 = JAN1996-MAR1996
        MONTH:   Month 0   = JAN1996

    Example: TVOL

        TVOL  | Ticker | Quarter
        1500  | AA     | -1
        52546 | BB     | 15

    Example: WORK

        BETA | Ticker | Month
        1.52 | AA     | 2
        1.54 | BB     | 3

    Example: Merged

        BETA | TVOL | Ticker | Month
        1.52 | 500  | AA     | 2

    I now want to merge these 2 tables using the following relationship: if the month is in quarter i, the data of quarter i-1 has to be used. So if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
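
    For the mapping itself: with Month 0 = JAN1996 and Quarter 0 = JAN1996-MAR1996, the quarter containing month m is floor(m/3), so the lagged lookup key is floor(m/3) - 1. A sketch of the join in SQL terms (SAS PROC SQL accepts the same shape; table and column names follow the examples above):

        /* Attach previous-quarter TVOL to each monthly WORK row,
           spreading the quarterly figure over its 3 months. */
        SELECT w.BETA,
               t.TVOL / 3 AS TVOL_monthly,
               w.Ticker,
               w.Month
        FROM   WORK w
        JOIN   TVOL t
          ON   t.Ticker  = w.Ticker
         AND   t.Quarter = FLOOR(w.Month / 3) - 1;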


  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. This application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with dbReader rights on the system tables. If we make the user sysadmin on the server, it all works fine.

    My question to you is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API? I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server. user/password belong to the user found on each SQL Server to be inspected. If "user" is sysadmin on the SQL Server to be inspected, all works OK, as it also does if we set ConnectAsUser to true and execute the scheduled task using my own credentials, which grant me sysadmin privileges on SQL Server through my Active Directory membership. Thanks!
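
    For reference, SQL Server 2005 and later ship fixed database roles in msdb specifically for Agent access (SQLAgentUserRole, SQLAgentReaderRole, SQLAgentOperatorRole). SQLAgentReaderRole is the usual fit for a read-only inspector: its members can view all jobs and their history without being sysadmin. A sketch of the grant, with placeholder login/user names:

        -- Let a low-privilege login read all Agent jobs and job history.
        USE msdb;
        CREATE USER [inspector_user] FOR LOGIN [inspector_login];
        EXEC sp_addrolemember N'SQLAgentReaderRole', N'inspector_user';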


  • SWF listens to XML, AS3

    - by VideoDnd
    My SWF listens to XML from a socket and from a document. How do I get my variables to grab XML from the socket instead of the XML document?

    PURPOSE: My purpose is to control the variables with an XML socket server. I hope my question is clear, but ask if there are any questions.

    EXAMPLE Flash file:

        import flash.net.*;
        import flash.display.*;
        import flash.events.*;
        import flash.system.Security;
        import flash.utils.Timer;
        import flash.events.TimerEvent;

        // MY VARIABLES, LINE 8-12
        var timer:Timer = new Timer(10);
        var myString:String = "";
        var count:int = 0;
        var myStg:String = "";
        var fcount:int = 0;

        var xml_s = new XMLSocket();
        xml_s.addEventListener(Event.CONNECT, socket_event_catcher);         // OnConnect
        xml_s.addEventListener(Event.CLOSE, socket_event_catcher);           // OnDisconnect
        xml_s.addEventListener(IOErrorEvent.IO_ERROR, socket_event_catcher); // Unable to connect
        xml_s.addEventListener(DataEvent.DATA, socket_event_catcher);        // OnData
        xml_s.connect("localhost", 1999);

        function socket_event_catcher(Event):void {
            switch (Event.type) {
                case 'ioError':
                    trace("ioError: " + Event.text); // Unable to connect :(
                    break;
                case 'connect':
                    trace("Connection Established!"); // Connected :)
                    break;
                case 'data':
                    trace("Received Data: " + Event.data);
                    //var myXML:XML = new XML(Event.data);
                    //trace("myXML.body.file: " + myXML.body.file);
                    //var myLoader:String = myXML.body.file;
                    //var urlReq:URLRequest = new URLRequest(myLoader);
                    //trace("file to load URL: " + urlReq);
                    break;
                case 'close':
                    trace("Connection Closed!"); // OnDisconnect :(
                    xml_s.close();
                    break;
            }
        }

        // LOAD XML
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("time.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);

        //var myXML:XML = new XML(Event.data);
        //trace("myXML.body.file: " + myXML.body.file);
        //var DOINK:String = myXML.body.file;
        //var urlReq:URLRequest = new URLRequest(DOINK);
        //trace("file to load URL: " + urlReq);

        // PARSE XML
        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
            trace(myXML.COUNT.text()); //-77777
            // grab the data as a string
            myString = myXML.COUNT.text();
            // grab the data as an int
            count = int(myXML.COUNT.text());
            // grab the data as a string
            myStg = myXML.COUNT.text();
            // grab the data as an int
            fcount = int(myXML.COUNT.text());
            trace("String: ", myString);
            trace("Int: ", count);
            trace(count - 1); // just to show that it's a number you can do math with (-77778)

            // TEXT
            var text:TextField = new TextField();
            text.text = myString;
            addChild(text);
        }

    Ruby server 'snippet':

        msg1 = {"msg" => {"head" => {"type" => "frctl", "seq_no" => seq_no, "version" => 1.0},
                          "SESSION" => {"text" => "88888", "timer" => -1000, "count" => 10, "fcount" => "10"}}}

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
            <TIMER TITLE="speed">100</TIMER>
            <COUNT TITLE="starting position">88888</COUNT>
            <FCOUNT TITLE="ramp">1000</FCOUNT>
        </SESSION>

    ENVIRONMENT: AS3, Ruby 186-25

    PROBLEM: Coding errors ("coercion of value")



  • Difficulty in Understanding Slideshow script

    - by shining star
    I have taken slide show script from net. But There some functions i cannot understand here is script <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd" > <html lang="en"> <head> <title></title> <script> var interval = 1500; var random_display = 0; var imageDir = "my_images/"; var imageNum = 0; imageArray = new Array(); imageArray[imageNum++] = new imageItem(imageDir + "01.jpg"); imageArray[imageNum++] = new imageItem(imageDir + "02.jpg"); imageArray[imageNum++] = new imageItem(imageDir + "03.jpg"); imageArray[imageNum++] = new imageItem(imageDir + "04.jpg"); imageArray[imageNum++] = new imageItem(imageDir + "05.jpg"); var totalImages = imageArray.length; function imageItem(image_location) { this.image_item = new Image(); this.image_item.src = image_location; return this.image_item.src; } function get_ImageItemLocation(imageObj) { return(imageObj.image_item.src) } alert(imageArray[imageNum].image_item.src); function randNum(x, y) { var range = y - x + 1; return Math.floor(Math.random() * range) + x; } function getNextImage() { if (random_display) { imageNum = randNum(0, totalImages-1); } else { imageNum = (imageNum+1) % totalImages; } var new_image = get_ImageItemLocation(imageArray[imageNum]); //alert(new_image) return(new_image); } function getPrevImage() { imageNum = (imageNum-1) % totalImages; var new_image = get_ImageItemLocation(imageArray[imageNum]); return(new_image); } function prevImage(place) { var new_image = getPrevImage(); document[place].src = new_image; } function switchImage(place) { var new_image = getNextImage(); document[place].src = new_image; var recur_call = "switchImage('"+place+"')"; timerID = setTimeout(recur_call, interval); } </script> </head> <body onLoad="switchImage('slideImg')"> <img name="slideImg" src="27.jpg" width=500 height=375 border=0> <a href="#" onClick="switchImage('slideImg')">play slide show</a> <a href="#" onClick="clearTimeout(timerID)"> pause</a> <a href="#" onClick="prevImage('slideImg'); clearTimeout(timerID)"> previous</a> <a href="#" onClick="switchImage('slideImg'); clearTimeout(timerID)">next </a> </body> </html> here exactly i dont know what does acctually function of get_ImageItemLocation(imageObj) and imageItem(image_location) what does these two functions seperately? Thanks in advance for attention


  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y.

    I have several general questions about how to design the architecture:

    (1) Can the workflow database be shared among all the applications?
    (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?
    (3) How should the workflow engine communicate with the applications? Should the applications call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How then will the workflow engine identify the applications? Both cases probably require some application identifier as a parameter in API calls?
    (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!


  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic, in that any time I wish to access the list I will want the entire list rather than just a piece of it - so it seems silly to issue a database query to gather together pieces of the list.

    AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient, because it means I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.

    ... just a little more info to let you know where I'm coming from. As soon as I had begun to understand SQL and databases in general, I was turned on to LINQ to SQL, so now I'm a little spoiled: I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks all! John
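
    The relational shape of an ordered, unique list is usually a child table with an explicit position column: the "sorting on every access" then costs an index scan rather than a real sort, and the whole list still comes back in one query. A sketch with illustrative names:

        -- Ordered list as a child table with a position column.
        CREATE TABLE list_items (
            list_id  INT NOT NULL,
            position INT NOT NULL,
            item     VARCHAR(100) NOT NULL,
            PRIMARY KEY (list_id, position),  -- keeps each list pre-sorted
            UNIQUE (list_id, item)            -- enforces uniqueness within a list
        );

        -- The whole list, already in order, in one query:
        SELECT item FROM list_items WHERE list_id = 1 ORDER BY position;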


  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine.

    Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
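
    On the "are the backups done?" worry: the T-SQL BACKUP statement is synchronous - control returns only once the backup file is completely written - so a job step that backs up, followed by a step that starts the transfer, cannot race itself. A minimal sketch with illustrative paths:

        -- BACKUP DATABASE blocks until the .bak file is fully written,
        -- so a following job step can safely start the FTP transfer.
        BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase.bak'
        WITH INIT;  -- overwrite the previous backup file

        -- On the destination server, the matching restore:
        RESTORE DATABASE MyDatabase
        FROM DISK = N'D:\Incoming\MyDatabase.bak'
        WITH REPLACE;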


  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time-consuming if there are many users involved. So I would like to map areas (countries, cities - I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time.

    I have read the basics of GIS and spatial querying on the MySQL website, but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check whether a coordinate falls within a certain area. Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any pre-existing databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
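
    A sketch of the two-stage idea in MySQL terms: store each region as a POLYGON, index it spatially, and use MBRContains as a cheap first filter before doing exact distance math on the survivors. MySQL's spatial predicates of that era work on minimum bounding rectangles, so this is a prefilter rather than an exact containment test; table and column names are illustrative:

        -- Regions as polygons with a spatial index (MySQL 5.x required
        -- MyISAM for SPATIAL indexes).
        CREATE TABLE regions (
            id     INT NOT NULL PRIMARY KEY,
            name   VARCHAR(100) NOT NULL,
            border POLYGON NOT NULL,
            SPATIAL INDEX (border)
        ) ENGINE = MyISAM;

        -- Cheap first pass: which regions' bounding rectangles contain the point?
        SELECT id, name
        FROM regions
        WHERE MBRContains(border, GeomFromText('POINT(-3.19 55.95)'));

        -- Exact distance checks then run only against users in those regions.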

