Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (OpenEdge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of mistakes and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the DBA as a timed job. Could I have your suggestions as to how to make it fit best practice? To bullet-proof it?

        ALTER PROCEDURE [dbo].[copyTable2Table]
            @sdb varchar(30),
            @stable varchar(30),
            @tdb varchar(30),
            @ttable varchar(30),
            @raiseerror bit = 1,
            @debug bit = 0
        as
        begin
            set nocount on

            declare @source varchar(65)
            declare @target varchar(65)
            declare @dropstmt varchar(100)
            declare @insstmt varchar(100)
            declare @ErrMsg nvarchar(4000)
            declare @ErrSeverity int

            set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
            set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
            set @dropStmt = 'drop table ' + @target
            set @insStmt = 'select * into ' + @target + ' from ' + @source
            set @errMsg = ''
            set @errSeverity = 0

            if @debug = 1
                print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

            -- drop the target table, copy the source table to the target
            begin try
                begin transaction
                exec(@dropStmt)
                exec(@insStmt)
                commit
            end try
            begin catch
                if @@trancount > 0
                    rollback
                select @errMsg = error_message(), @errSeverity = error_severity()
            end catch

            -- update the log table
            insert into HHG_system.dbo.copyaudit
                (copytime, copyuser, source, target, errmsg, errseverity)
            values (getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

            if @debug = 1
                print('Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity))

            -- handle errors, return value
            if @errMsg <> ''
            begin
                if @raiseError = 1
                    raiserror(@errMsg, @errSeverity, 1)
                return 1
            end

            return 0
        END

    Thanks!

  • Unit test insert/update/delete

    - by Kurresmack
    I have googled this a little and didn't really find the answer I needed. I am working on a web page in C# with MS SQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with data that actually goes into the database. The problem is that I now depend on having at least two users whose IDs I know, and furthermore I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user: that would mean I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the record manually. If I did it any other way, without saving the data to the DB, I would not know for sure that the data was present in the database after updating, etc. What is the proper way to do this without having a test that tests a lot of functionality in one go?
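    One way to keep each test small without leaving rows behind is to wrap the test in a TransactionScope and never complete it: everything rolls back when the scope is disposed, so create/update/delete no longer have to share one giant test and no manual cleanup is needed. A sketch -- the data context, entity, and test framework names below are placeholders, not from the question:

        using System;
        using System.Linq;
        using System.Transactions;
        using NUnit.Framework;   // assumed test framework; any xUnit-style framework works the same way

        [TestFixture]
        public class UserUpdateTests
        {
            [Test]
            public void Update_ChangesTheStoredName()
            {
                // Disposing the scope without calling Complete() rolls everything back,
                // so the test leaves no rows in the database.
                using (new TransactionScope())
                {
                    var db = new MessagingDataContext();          // assumed LINQ to SQL context
                    var user = new User { Name = "Test user" };   // assumed entity
                    db.Users.InsertOnSubmit(user);
                    db.SubmitChanges();

                    user.Name = "Updated name";
                    db.SubmitChanges();

                    // Reload with a second context; it enlists in the same ambient transaction.
                    var reloaded = new MessagingDataContext().Users.Single(u => u.Id == user.Id);
                    Assert.AreEqual("Updated name", reloaded.Name);
                }
            }
        }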

  • Got a table of people who I want to link to each other, many-to-many, with the links being bidirectional

    - by dflock
    Imagine you live in very simplified example land - and imagine that you've got a table of people in your MySQL database:

        create table person (
            person_id int,
            name text
        )

        select * from person;
        +-----------+-------+
        | person_id | name  |
        +-----------+-------+
        |         1 | Alice |
        |         2 | Bob   |
        |         3 | Carol |
        +-----------+-------+

    and these people need to collaborate/work together, so you've got a link table which links one person record to another:

        create table person__person (
            person__person_id int,
            person_id int,
            other_person_id int
        )

    This setup means that links between people are uni-directional - i.e. Alice can link to Bob without Bob linking to Alice and, even worse, Alice can link to Bob and Bob can link to Alice at the same time, in two separate link records. As these links represent working relationships, in the real world they're all two-way mutual relationships. The following are all possible in this setup:

        select * from person__person;
        +-------------------+-----------+-----------------+
        | person__person_id | person_id | other_person_id |
        +-------------------+-----------+-----------------+
        |                 1 |         1 |               2 |
        |                 2 |         2 |               1 |
        |                 3 |         2 |               2 |
        |                 4 |         3 |               1 |
        +-------------------+-----------+-----------------+

    For example, with person__person_id = 4 above, when you view Carol's (person_id = 3) profile, you should see a relationship with Alice (person_id = 1), and when you view Alice's profile, you should see a relationship with Carol, even though the link goes the other way. I realize that I can do union and distinct queries and whatnot to present the relationships as mutual in the UI, but is there a better way? I've got a feeling that there is a better way, one where this issue would neatly melt away by setting up the database properly, but I can't see it. Anyone got a better idea?
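    One common convention that makes the ambiguity melt away at the data level: store each relationship exactly once, in canonical order (smaller person_id first), and put a unique key on (person_id, other_person_id). "Alice is linked to Bob" and "Bob is linked to Alice" then map to the same single row, and duplicates become impossible. A small helper for producing the canonical pair before insert -- a sketch only; on the MySQL side it would be paired with a UNIQUE constraint:

        using System;

        public static class PersonLinks
        {
            // Returns the pair ordered so the lower id comes first; insert that pair into
            // person__person and query with "person_id = @me OR other_person_id = @me".
            public static Tuple<int, int> Canonical(int personId, int otherPersonId)
            {
                if (personId == otherPersonId)
                    throw new ArgumentException("A person cannot be linked to themselves.");

                return personId < otherPersonId
                    ? Tuple.Create(personId, otherPersonId)
                    : Tuple.Create(otherPersonId, personId);
            }
        }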

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

    1. If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
    2. If a URL can be handled by the virtual path provider, let that handle it.
    3. If there is no other match, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

    - Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
    - Add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could then deal with the first two rules, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
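    A sketch of the second option with the first two rules kept intact: register the {*path} route last and gate it with a route constraint, so URLs that match earlier routes or existing files never reach it, and URLs with no matching Page fall through to the 404 handling. The PageRepository lookup is an assumed stand-in for however Page rows are fetched.

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        // Assumed stand-in for the real data access; replace with the actual Page lookup.
        public static class PageRepository
        {
            public static bool Exists(string path) { return true; /* look the Page up by URL here */ }
        }

        public class PageExistsConstraint : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                var path = values[parameterName] as string;
                // Only claim the URL if a Page row actually matches it.
                return !string.IsNullOrEmpty(path) && PageRepository.Exists(path);
            }
        }

        // In RegisterRoutes, after every more specific route:
        // routes.MapRoute(
        //     "Pages",
        //     "{*path}",
        //     new { controller = "Page", action = "Show" },
        //     new { path = new PageExistsConstraint() });

    Because routes are evaluated in registration order and RouteExistingFiles is false by default, files on disk and earlier routes take precedence without any extra work.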

  • How to remove the error "Can't find PInvoke DLL SQLite.Interop.dll"

    - by Shailesh Jaiswal
    I am developing a Windows Mobile application using the SQLite database, and I connect to it with the following code:

        SQLiteConnection cn = new SQLiteConnection();
        SQLiteDataReader SQLiteDR;
        cn.ConnectionString = @"Data Source=F:\CompNetDB.db3";
        cn.Open();
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.CommandText = "select * from CustomerInfo";
        cmd.CommandType = CommandType.Text;
        cmd.Connection = cn;
        SQLiteDR = cmd.ExecuteReader();

    In the above case I am getting the error "Can't find PInvoke DLL SQLite.Interop.dll". I have added a reference to System.Data.SQLite from the \SQLite.NET\bin\CompactFramework folder, which is the folder installed by default when I installed SQLite. In the same folder there is another DLL file named SQLite.Interop.066.DLL; when I try to add a reference to it, I get an error saying the DLL cannot be added. Are the two DLLs SQLite.Interop.dll and SQLite.Interop.066.dll the same? In the above code, how do I solve the error "Can't find PInvoke DLL SQLite.Interop.dll"? Can you please tell me whether there is a mistake in my code or whether I am missing something in my application?

  • How to compile OCaml to native code

    - by Indra Ginanjar
    I'm really interested in learning OCaml: it's fast (they say it can be compiled to native code) and it's functional. So I tried to code something easy, like enabling the MySQL event scheduler:

        #load "unix.cma";;
        #directory "+mysql";;
        #load "mysql.cma";;

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;

    It works fine in the OCaml interpreter, but when I tried to compile it to native code (I'm using Ubuntu Karmic), neither of these commands worked:

        ocamlopt -o mysqleventon mysqleventon.ml unix.cmxa mysql.cmxa
        ocamlopt -o mysqleventon mysqleventon.ml unix.cma mysql.cma

    I also tried:

        ocamlc -c mysqleventon.ml unix.cma mysql.cma

    All of them produce the same message:

        File "mysqleventon.ml", line 1, characters 0-1:
        Error: Syntax error

    Then I tried removing the "#load" directives, so the code became:

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;

    Now ocamlopt produces:

        File "mysqleventon.ml", line 1, characters 9-28:
        Error: Unbound value Mysql.quick_connect

    I hope someone can tell me where I'm going wrong.

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                                         {'EXIT',"Error reading Mnesia database"}}}
              in function application_master:init/4

    Here I was thinking that my system is running on PostgreSQL, while it seems I was still using Mnesia. I have several questions: how can I make sure Mnesia is not being used, and how can I divert all the ejabberd activity to PostgreSQL? This is the modules part of my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc, []},
          {mod_announce, [{access, announce}]}, % requires mod_adhoc
          {mod_caps, []},
          {mod_configure, []}, % requires mod_adhoc
          {mod_ctlextra, []},
          {mod_disco, []},
          {mod_irc, []},
          {mod_last_odbc, []},
          {mod_muc, [
            {access, muc},
            {access_create, muc},
            {access_persistent, muc},
            {access_admin, muc_admin},
            {max_users, 500}
          ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub, [ % requires mod_caps
            {access_createnode, pubsub_createnode},
            {plugins, ["default", "pep"]}
          ]},
          {mod_register, [
            {welcome_message, none},
            {access, register}
          ]},
          {mod_roster_odbc, []},
          {mod_stats, []},
          {mod_time, []},
          {mod_vcard_odbc, []},
          {mod_version, []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB it cannot operate correctly - but maybe I'm totally off track here, and would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be derived from the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.

  • Best way to switch between secure and insecure connections without bugging the user

    - by Brian Lang
    The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it to a database and, if successful, gives a confirmation message.

    Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to ensure no eavesdropping. After the data is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time-reservation application). The problem is that by using the Google Maps API I'd be linking to items through an insecure source, which in turn prompts the user with a nasty warning message. I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year.

    What I am hoping for is a workaround: use a secure connection to take in the data, and after it is processed, bring the user to a page that isn't secure and allows me to use the Google Maps API. If any of you have a Netflix account, you can see exactly what I would like to do: signing in happens on a secure page, which then takes you to your account/queue on an insecure page. Any suggestions? Thanks!

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a package variable that is an Object, called OWell. I have a data flow task that gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task that converts well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. The result is stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50.

    I then have a SQL task that connects to the local database and attempts to add the records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently I have the ResultSet set to None and the following SQL statement:

        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result Set. I am getting the following error:

        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query
        "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error:
        "An error occurred while extracting the result into a variable of type (DBTYPE_STR)".
        Possible failure reasons: Problems with the query, "ResultSet" property not set correctly,
        parameters not set correctly, or connection not established correctly.

    I don't think it's a data type issue, but rather that I somehow am not using the result set properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add the records that are missing?
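    For the "refer to the recordset" part, one approach (a sketch, not the only SSIS pattern) is to read the Object variable back into a DataTable inside a Script Task and then issue parameterized inserts or stage the rows from there. The OleDbDataAdapter overload below is the usual way to unwrap an ADO recordset held in an SSIS Object variable:

        using System.Data;
        using System.Data.OleDb;

        public static class RecordsetHelper
        {
            // Converts the ADO recordset held in an SSIS Object variable into a DataTable.
            // Inside a Script Task you would call: ToDataTable(Dts.Variables["User::OWell"].Value)
            public static DataTable ToDataTable(object adoRecordset)
            {
                var table = new DataTable();
                new OleDbDataAdapter().Fill(table, adoRecordset);
                return table;
            }
        }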

  • ASP.NET application using old connection string.

    - by Doug S.
    I am trying to publish a website using ASP.NET MVC3 EF and CODEFIRST with a SQL Server 2008 backend. On my local machine I was using a sql express db for development, but now that I am pushing live, I want to use my hosted production database. The problem is that when I try to run the application, it is still using my local db connection string. I have completely removed the old connection string from my web.config file and am using the <clear /> tag before creating the new connection string. I have also cleaned the solution and rebuilt, but somehow it is still connecting to the old db. What am I missing? This is the new connection string:

        <connectionStrings>
            <clear />
            <add name="CellularAutomataDBContext"
                 connectionString="Server=XXX; Database=CellularAutomata; User ID=XXX; Password=XXX; Trusted_Connection=False"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>
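    One thing worth ruling out, sketched under the assumption that EF Code First is resolving the connection by convention rather than by this entry: point the context explicitly at the named connection string and dump what it actually resolved at runtime, so you can see whether the old SQL Express string is coming from a machine-level config, an attached .mdf, or the default convention.

        using System.Data.Entity;
        using System.Diagnostics;

        public class CellularAutomataDBContext : DbContext
        {
            // Force EF to use the named web.config entry rather than whatever it infers.
            public CellularAutomataDBContext()
                : base("name=CellularAutomataDBContext")
            {
            }
        }

        public static class ConnectionStringCheck
        {
            public static void Dump()
            {
                using (var context = new CellularAutomataDBContext())
                {
                    // Shows the connection string EF actually resolved at runtime.
                    Trace.WriteLine(context.Database.Connection.ConnectionString);
                }
            }
        }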

  • Web form is not updating tables, why?

    - by JPJedi
    I have a web application, and one page is an update page for some profile information. Below is the code I am using to update the table, but I think it is wrong. Does anything stick out? The connection string works, because it is used to read the database to get the profile information; I just removed it here since it contains the login and password for the DB. "player" is the class of properties that contains the player information and "ds" is the dataset, but I would like to update the database itself online...

        Dim connectionString As String = ""
        Dim GigsterDBConnection As New System.Data.SqlClient.SqlConnection(connectionString)
        GigsterDBConnection.Open()

        Dim updatetoursql As String = "UPDATE PLAYERS SET FIRSTNAME = '" & player.FIRSTNAME & "', LASTNAME = '" & player.LASTNAME & "', ADDRESS = '" & player.ADDRESS & "', CITY = '" & player.CITY & "', ZIP = '" & player.ZIP & "', PHONE = '" & player.PHONE & "', EMAIL = '" & player.EMAIL & "', REFFEREDBY = '" & player.REFEREDBY & "' "
        updatetoursql = updatetoursql & "PLAYERID = '" & player.PLAYERID & "';"

        Dim cmd As New System.Data.SqlClient.SqlCommand(updatetoursql, GigsterDBConnection)
        Dim sqlAdapter As New System.Data.SqlClient.SqlDataAdapter(cmd)
        sqlAdapter.Update(ds, "PLAYERS")

    I think the issue is in the last three lines of the code. Am I doing it right, or is there a better way? Thanks.
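    For comparison, here is a C# sketch of the same update done with SqlCommand parameters instead of string concatenation; "player" and "connectionString" are assumed to exist exactly as in the question. Two things the concatenated version runs into: the appended PLAYERID fragment has no WHERE keyword in front of it, and an apostrophe in any field breaks (or hijacks) the SQL. Parameters plus ExecuteNonQuery avoid both, and no DataAdapter is needed for a direct UPDATE.

        const string sql =
            "UPDATE PLAYERS SET FIRSTNAME = @First, LASTNAME = @Last, ADDRESS = @Address, " +
            "CITY = @City, ZIP = @Zip, PHONE = @Phone, EMAIL = @Email, REFFEREDBY = @ReferredBy " +
            "WHERE PLAYERID = @PlayerId";

        using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
        using (var command = new System.Data.SqlClient.SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@First", player.FIRSTNAME);
            command.Parameters.AddWithValue("@Last", player.LASTNAME);
            command.Parameters.AddWithValue("@Address", player.ADDRESS);
            command.Parameters.AddWithValue("@City", player.CITY);
            command.Parameters.AddWithValue("@Zip", player.ZIP);
            command.Parameters.AddWithValue("@Phone", player.PHONE);
            command.Parameters.AddWithValue("@Email", player.EMAIL);
            command.Parameters.AddWithValue("@ReferredBy", player.REFEREDBY);
            command.Parameters.AddWithValue("@PlayerId", player.PLAYERID);

            connection.Open();
            int rowsAffected = command.ExecuteNonQuery();   // direct UPDATE, no DataAdapter required
        }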

  • SubSonic 3.0 Simple Repository Adding a DateTime Property To An Object

    - by Blounty
    I am trying out SubSonic to see if it is viable to use on production projects. I seem to have stumbled upon an issue with regard to updating the database with default values (String and DateTime) when a new column is created. Say a new property of type DateTime or String is added to an object:

        public class Bug
        {
            public int BugId { get; set; }
            public string Title { get; set; }
            public string Overview { get; set; }
            public DateTime TrackedDate { get; set; }
            public DateTime RemovedDate { get; set; }
        }

    When the code to add that type of object to the database is run:

        var repository = new SimpleRepository(SimpleRepositoryOptions.RunMigrations);
        repository.Add(new Bug()
        {
            Title = "A Bug",
            Overview = "An Overview",
            TrackedDate = DateTime.Now
        });

    it generates the following SQL:

        UPDATE Bugs SET RemovedDate=''01/01/1900 00:00:00''

    For some reason it is adding two single quotes to each end of the string or DateTime, which causes the following error:

        System.Data.SqlClient.SqlException - Incorrect syntax near '01'

    I am connecting to SQL Server 2005. Any help would be appreciated, as apart from this issue I am finding SubSonic to be a great product. I have created a screencast of my error here:
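    One workaround to consider while the doubled quotes remain unexplained -- an assumption about how SimpleRepository migrations handle new columns, not a confirmed fix -- is to declare the newly added column as nullable, so the generated migration has no default value to write into existing rows at all:

        using System;

        public class Bug
        {
            public int BugId { get; set; }
            public string Title { get; set; }
            public string Overview { get; set; }
            public DateTime TrackedDate { get; set; }

            // Nullable: existing rows simply get NULL instead of a quoted '01/01/1900' default.
            public DateTime? RemovedDate { get; set; }
        }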

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base that has a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I go to deploy on application servers that may have different, conflicting versions of my library. I'm dependent on around 30 jars right now and am midway through BNDing them up.

    Now, some of my modules are easy to declare the versioned dependencies of, such as my networking components: they statically reference classes within the JRE and other BND-ed libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any one of a number of drivers.

    I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database-access-related components (via JDBC), etc. I want my code to still be usable without OSGi, via a single jar file with all my dependencies (via JarJar), and also to be modular via the OSGi metadata and granular bundles with dependency information.

    - How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)?
    - Is there a run-time method to detect whether I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.)?
    - Do I need to use a different class-loading mechanism if I am in an OSGi container?
    - Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module?

    I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?

  • Creating an Admin directory in Rails

    - by matsko
    I've been developing the CMS backend for a website for a few weeks now. The idea is to craft everything in the backend first so that it can manage the database and information that will be displayed on the main website. As of now, I have all my code set up in the normal Rails MVC structure, so the users admin is /users and videos is /videos.

    My plan is to take this code and move it to an /admin directory, so the two controllers above would be accessed at /admin/users and /admin/videos. I'm not sure how to do the route (adding /admin as a prefix), nor am I sure how to manage the logic. What I'm thinking of doing is setting up an additional 'middle' controller that somehow gets nested between the ApplicationController and the targeted controller when the /admin directory is accessed. This way, any additional flags and overloaded methods can be spawned for the /admin section only (I believe I could also use a filter for this). If that were to work, then the next issue would be separating the views logic (but that would just be renaming folders and so on).

    Either I do it that way, or I have two Rails instances that share the MVC code between them (and I guess the database too), but I fear that would cause lots of duplication errors. Any ideas as to how I should go about doing this? Many thanks!

  • Web-based data modelling and management tool

    - by pixeldude
    Is there a web-based tool available where I am able to:

    - define data models (like in a database admin tool)
    - fill in data (in custom web forms, not too generic) with basic features like completion
    - import data from CSV or Excel sheets
    - export data to CSV or SQL
    - create snapshots of my data models (versions, diff, etc.)
    - share my data models
    - discuss/collaborate with other people about my data models

    Well, I could develop something like this in PHP or with Ruby or whatever, but this is such a common task that the application support could be a lot better -- and it would be language- and database-independent. It would help to maintain data models in different versions, and you could maybe share your data models with others, extend them with your team members, etc. There is a website called Freebase which allows you to define a data entity model and fill in data, and which also has export features, but I need to define my own data model with my own granularity and structure -- and it should not be shared in public if I don't want it to be. How do you solve problems like this yourselves?

  • Javascript onunload form submit isn't submitting data

    - by Kevin
    I currently have a form that checks whether a user has unsubmitted changes when they leave the page, via a function called through the onunload event. Here's the function:

        function saveOnExit() {
            var answer = confirm("You are about to leave the page. All unsaved work will be lost. Would you like to save now?");
            if (answer) {
                document.main_form.submit();
            }
        }

    And here's the form:

        <body onunload="saveOnExit()">
            <form name="main_form" id="main_form" method="post" action="submit.php" onsubmit="saveScroll()">
                <textarea name="comments"></textarea>
                <input type="submit" name="submit2" value="Submit!"/>
            </form>

    I'm not sure what I'm doing wrong here. The data gets submitted and saved in my database if I just press the form's submit button. However, trying to submit the form through the onunload event doesn't result in anything being stored, from what I can tell. I've tried adding onclick alerts to the submit button and onsubmit alerts to the form elements, and I can verify that the submit button is being triggered and that the form does get submitted. However, nothing gets stored in the database. Any ideas as to what I'm doing wrong? Thanks.

  • How to read a LARGE SQLite file to be copied into the Android emulator or device from the assets folder?

    - by Peter SHINe ???
    I guess many people have already read this article, "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368

    However, it keeps throwing an IOException at:

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file; it's as big as 8 MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (since I am using Korean), and added the android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code with a much smaller text file, and it worked fine.

    Can anyone help me out with this? I've searched many places, but nowhere gave me a clear answer or a good solution -- good meaning efficient or easy. I will try BufferedInputStream/BufferedOutputStream, but if the simpler version cannot work, I don't think this will work either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate a considerate answer. Thank you.

    WITH MORE DETAIL:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // transfer bytes from the inputfile to the outputfile
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

  • em.persist doesn't seem to persist data to a PostgreSQL db

    - by Mario
    I've got a simple Java main which must write bean data to a PostgreSQL database. I use the EntityManager to persist or update objects, and the Hibernate and TopLink driver connections are specified in the persistence.xml file. When I call em.persist(obj), nothing is saved to the database, and I don't know why. Here is my simple code:

        private static void importa(FileReader f) throws IOException {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("orpt2");
            EntityManager em = emf.createEntityManager();

            dispositivoMedico = new DispositivoMedico();
            dispositivoMedico.setCategoria("prova");
            dispositivoMedico.setCodice("323");
            em.persist(dispositivoMedico);

    And here is my persistence.xml (schema http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd), which lists the following entity classes:

        it.ariadne.orpt2.entities.AccessoriScheda
        it.ariadne.orpt2.entities.CampiSchede
        it.ariadne.orpt2.entities.CampiSchedeSalvati
        it.ariadne.orpt2.entities.CampoAggiuntivo
        it.ariadne.orpt2.entities.Categorie
        it.ariadne.orpt2.entities.CategorieCampi
        it.ariadne.orpt2.entities.CategorieCampiPK
        it.ariadne.orpt2.entities.ClasseCivab
        it.ariadne.orpt2.entities.DecodificaStato
        it.ariadne.orpt2.entities.DispositivoMedico
        it.ariadne.orpt2.entities.Ente
        it.ariadne.orpt2.entities.FormaNegoziazione
        it.ariadne.orpt2.entities.Fornitore
        it.ariadne.orpt2.entities.LogSession
        it.ariadne.orpt2.entities.Modello
        it.ariadne.orpt2.entities.Periodicita
        it.ariadne.orpt2.entities.Produttore
        it.ariadne.orpt2.entities.Ruolo
        it.ariadne.orpt2.entities.RuoloPK
        it.ariadne.orpt2.entities.RuoloUtente
        it.ariadne.orpt2.entities.Scheda
        it.ariadne.orpt2.entities.SchedaSalvata
        it.ariadne.orpt2.entities.Tipologia
        it.ariadne.orpt2.entities.Utente

    Thank you for your help. Mario

  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into this error while trying to modify the Pinax database model. I am using Eclipse PyDev, and I get this error in PyDev:

        Exception Type: TemplateSyntaxError at /
        Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")

    Please, how do I correct this?

    UPDATE: I asked this question and left it unresolved some months back, and wouldn't you know it, I ran into the bug again this week, typed the error message into Google, and hit the page with my own unanswered question -- so I think I have to answer it and hope it helps someone with the same problem in the future. The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find it. To resolve it, change the SQLite path to an absolute one, like this:

        DATABASE_ENGINE = 'sqlite3'  # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # Or path to database file if using sqlite3.
        DATABASE_USER = ''      # Not used with sqlite3.
        DATABASE_PASSWORD = ''  # Not used with sqlite3.
        DATABASE_HOST = ''      # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT = ''      # Set to empty string for default. Not used with sqlite3.

    I hope that helps.

  • HSQLDB connection error

    - by user1478527
    I created a Java program to connect to HSQLDB. The first configuration works well:

        public final static String DRIVER = "org.hsqldb.jdbcDriver";
        public final static String URL = "jdbc:hsqldb:file:F:/hsqlTest/data/db";
        public final static String DBNAME = "SA";

    but this one does not:

        public final static String DRIVER = "org.hsqldb.jdbcDriver";
        public final static String URL = "jdbc:hsqldb:file:C:/Program Files/tich Tools/mos tech/app/data/db/t1/t2/01/db";
        public final static String DBNAME = "SA";

    The error looks like this:

        java.sql.SQLException: error in script file line: 1 unexpected token: ?
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.JDBCConnection.<init>(Unknown Source)
            at org.hsqldb.jdbc.JDBCDriver.getConnection(Unknown Source)
            at org.hsqldb.jdbc.JDBCDriver.connect(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at HSQLDBManagerImp.getconn(HSQLDBManagerImp.java:48)
            at TESTHSQLDB.main(TESTHSQLDB.java:15)
        Caused by: org.hsqldb.HsqlException: error in script file line: 1 unexpected token: ?
            at org.hsqldb.error.Error.error(Unknown Source)
            at org.hsqldb.scriptio.ScriptReaderText.readDDL(Unknown Source)
            at org.hsqldb.scriptio.ScriptReaderBase.readAll(Unknown Source)
            at org.hsqldb.persist.Log.processScript(Unknown Source)
            at org.hsqldb.persist.Log.open(Unknown Source)
            at org.hsqldb.persist.Logger.openPersistence(Unknown Source)
            at org.hsqldb.Database.reopen(Unknown Source)
            at org.hsqldb.Database.open(Unknown Source)
            at org.hsqldb.DatabaseManager.getDatabase(Unknown Source)
            at org.hsqldb.DatabaseManager.newSession(Unknown Source)
            ... 7 more
        Caused by: org.hsqldb.HsqlException: unexpected token: ?
            at org.hsqldb.error.Error.parseError(Unknown Source)
            at org.hsqldb.ParserBase.unexpectedToken(Unknown Source)
            at org.hsqldb.ParserCommand.compilePart(Unknown Source)
            at org.hsqldb.ParserCommand.compileStatement(Unknown Source)
            at org.hsqldb.Session.compileStatement(Unknown Source)
            ... 16 more

    I googled this connection question, but most of what I found did not help much; someone said it may be an HSQLDB version problem. The database was shut down in "SHUT COMPRESS" mode. Can anyone give some advice?

  • Approach for replacing forms authentication in .NET application

    - by Ash Machine
    My question is about an approach, and I am looking for tips or links to help me develop a solution. I have a .NET 4.0 Web Forms application that uses Forms authentication with the aspnetdb SQL database of users and passwords. A new feature for the application is an authentication mechanism using single sign-on to allow access for thousands of new users. Essentially, when a user logs in through the new single-sign-on method, I will be able to identify them as a legitimate user with a role. So I will have something like:

        HttpContext.Current.Session["email_of_authenticated_user"]   // their identity
        HttpContext.Current.Session["role_of_authenticated_user"]    // their role

    Importantly, I don't want to maintain these users and roles redundantly in the aspnetdb database, which will be retired, but I do want to use the session objects above to let the user pass through the application as if they were passing through with Forms authentication. I don't think custom role providers or custom membership providers are helpful here, since they do not allow for creating session-level users. So my question is how to use the session-level user and role that I do have to "mimic" all the Forms authentication goodness, like enforcing:

        [System.Security.Permissions.PrincipalPermission(System.Security.Permissions.SecurityAction.Demand, Role = "Student")]

    or:

        <authorization>
            <allow users="wilma, barney" />
        </authorization>

    Thanks for any pointers.
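    A minimal sketch of that "mimic" step, assuming the two session keys from the question are populated by the single-sign-on handshake: a base page wraps them in a GenericPrincipal so that PrincipalPermission demands (and role checks made during page processing) see the role without touching aspnetdb.

        using System;
        using System.Security.Principal;
        using System.Threading;
        using System.Web.UI;

        // Assumed base class for pages behind the SSO login; the session key names come from the question.
        public class SsoPage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                var email = Session["email_of_authenticated_user"] as string;
                var role = Session["role_of_authenticated_user"] as string;

                if (!string.IsNullOrEmpty(email))
                {
                    var principal = new GenericPrincipal(new GenericIdentity(email), new[] { role });
                    Context.User = principal;            // for HttpContext-based checks
                    Thread.CurrentPrincipal = principal; // PrincipalPermission demands read this
                }
            }
        }

    One caveat: the web.config <authorization> rules are evaluated by UrlAuthorizationModule earlier in the pipeline than session state is available, so that part would need a different hook (for example an authentication cookie issued at SSO time) rather than session.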

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Does anyone have any ideas on how to fix that, or a better way to handle this? Should I be exporting that first query to a fixture and then re-importing it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)

                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass

  • Excel VBA SQL Import

    - by user307655
    Hi all, I have the following code, which imports data from a spreadsheet to SQL Server directly from Excel VBA. The code works great. However, I am wondering if somebody can help me modify the code to:

    1) check if the data in column A already exists in the SQL table;
    2) if it exists, update the existing row rather than importing a new one;
    3) if it does not exist, import it as a new row.

    Thanks again for your help.

        Sub SQLIM()
            ' Send data to SQL Server
            ' This code loads data from an Excel Worksheet to an SQL Server Table
            ' Data should start in column A and should be in the same order as the server table
            ' Autonumber fields should NOT be included
            ' FOR THIS CODE TO WORK
            ' In VBE you need to go Tools References and check Microsoft Active X Data Objects 2.x library

            Dim Cn As ADODB.Connection
            Dim ServerName As String
            Dim DatabaseName As String
            Dim TableName As String
            Dim UserID As String
            Dim Password As String
            Dim rs As ADODB.Recordset
            Dim RowCounter As Long
            Dim ColCounter As Integer
            Dim NoOfFields As Integer
            Dim StartRow As Long
            Dim EndRow As Long
            Dim shtSheetToWork As Worksheet

            Set shtSheetToWork = ActiveWorkbook.Worksheets("Sheet1")
            Set rs = New ADODB.Recordset

            ServerName = "WIN764X\sqlexpress"   ' Enter your server name here
            DatabaseName = "two28it"            ' Enter your database name here
            TableName = "COS"                   ' Enter your Table name here
            UserID = ""                         ' Enter your user ID here
            ' (Leave ID and Password blank if using windows Authentification")
            Password = ""                       ' Enter your password here
            NoOfFields = 7                      ' Enter number of fields to update (eg. columns in your worksheet)
            StartRow = 2                        ' Enter row in sheet to start reading records
            EndRow = shtSheetToWork.Cells(Rows.Count, 1).End(xlUp).Row  ' Enter row of last record in sheet

            ' CHANGES
            ' Dim shtSheetToWork As Worksheet
            ' Set shtSheetToWork = ActiveWorkbook.Worksheets("Sheet1")
            '**
            Set Cn = New ADODB.Connection
            Cn.Open "Driver={SQL Server};Server=" & ServerName & ";Database=" & DatabaseName & _
                ";Uid=" & UserID & ";Pwd=" & Password & ";"

            rs.Open TableName, Cn, adOpenKeyset, adLockOptimistic

            For RowCounter = StartRow To EndRow
                rs.AddNew
                For ColCounter = 1 To NoOfFields
                    rs(ColCounter - 1) = shtSheetToWork.Cells(RowCounter, ColCounter)
                Next ColCounter
            Next RowCounter

            rs.UpdateBatch

            ' Tidy up
            rs.Close
            Set rs = Nothing
            Cn.Close
            Set Cn = Nothing
        End Sub

  • What issues to consider when rolling your own data-backend for Silverlight / AJAX on non-ASP.NET servers

    - by Edward Tanguay
    I have read-only Silverlight and AJAX apps which read static text and XML files from a PHP/Apache server. This works very nicely, with features such as asynchronous loading, lazy-loading only what I need for each page, loading in the background, and a little query language I developed to get a PHP script to create custom XML files. It's pragmatic read-only REST, and it all works fast and fine for read-only sites.

    Now I want to also add the ability to write data from these apps to a database on the same PHP/Apache server. For those of you who have built similar data-access layers, what do I need to consider while building this, especially regarding security, so that not just any client can write to and alter my database? For example:

    - check HTTP_USER_AGENT for security
    - check REMOTE_ADDR for security
    - require a special code for security, perhaps a list of TAN codes (such as banks use for online transactions), each of which can only be used once, with both the client and server holding the list

    I also wonder if there is some kind of standard REST query I should lean on, e.g. for building SQL-like statements in the URL parameters, such as:

        http://www.thedatalayersite.com/query?insertinto=customers&...

    Any thoughts, notes from experience, ideas, gotchas, and especially ideas on tightening down security in this endeavor would be helpful.
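    One concrete variant of the "special code" idea, sketched under the assumption that the Silverlight/AJAX client and the PHP script share a secret: sign each write request with an HMAC so the server can reject bodies that were not produced by your client. (Anything embedded in a downloadable client can be extracted, so this raises the bar rather than making writes tamper-proof.)

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class RequestSigner
        {
            // Returns a Base64 HMAC-SHA256 of the request body; the PHP side recomputes the
            // same value with hash_hmac and refuses the write if the two do not match.
            // "sharedSecret" is an assumed value provisioned to both client and server.
            public static string Sign(string requestBody, string sharedSecret)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(requestBody));
                    return Convert.ToBase64String(hash);
                }
            }
        }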

  • What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms?

    - by Simon
    What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms? I'm trying to better understand this 'simple' question in order to determine whether existing pages I have in a (very) simple site can be easily converted from ASP.NET WebForms. Either a 'conversion' of the process below, or an alternative lifecycle, is what I'm looking for.

    What I'm currently doing (yes, I know that anyone capable of answering my question already knows all this -- I'm just trying to get a comparison of the 'lifecycle', so I thought I'd start by filling in what we already all know):

    Rendering the page:
    - I have a master page which contains my basic template.
    - I have content pages that give me named regions from the master page into which I put content.
    - In an event handler for each content page I load data from the database (mostly read-only).
    - I bind this data to ASP.NET controls representing grids, dropdowns or repeaters. This data all 'lives' inside the generated HTML. Some of it gets into ViewState (but I won't go into that too much!).
    - I set properties or bind data to certain items like Image or TextBox controls on the page.
    - The page gets sent to the client rendered as non-reusable HTML. I try to avoid using ViewState beyond the minimum the page needs.

    Client side (not using ASP.NET AJAX):
    - I may use jQuery and some nasty tricks to find controls on the page and perform operations on them.
    - If the user selects from a dropdown, a postback is generated which triggers a C# event in my codebehind. This event may go to the database, but whatever it does, a completely newly generated HTML page ends up getting sent back to the client.
    - I may use Page.Session to store key-value pairs I need to reuse later.

    So with MVC, how does this 'lifecycle' change?
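    Loosely, the WebForms steps above map onto one controller action per request instead of a page lifecycle: there is no ViewState and no control event wired to a postback, and the "dropdown postback" becomes a plain GET or POST to another action. A rough sketch -- the controller, repository call, and model below are placeholders, not from the question:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class ReportsController : Controller
        {
            // GET /Reports -- plays the role of Page_Load plus data binding:
            // fetch the data, hand a model to the view, return the rendered HTML.
            public ActionResult Index()
            {
                List<string> reportNames = LoadReportNamesFromDatabase();
                return View(reportNames);
            }

            // POST /Reports/Filter -- plays the role of the dropdown's postback event handler:
            // the selected value arrives as an action parameter, with no ViewState involved.
            [HttpPost]
            public ActionResult Filter(int categoryId)
            {
                List<string> reportNames = LoadReportNamesFromDatabase(categoryId);
                return View("Index", reportNames);
            }

            private static List<string> LoadReportNamesFromDatabase(int? categoryId = null)
            {
                // Placeholder for the database read the question performs in an event handler.
                return new List<string> { "Example report" };
            }
        }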
