Search Results

Search found 10028 results on 402 pages for 'berkeley db'.

Page 161/402 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • Client-Server Networking Between PHP Client and Java Server

    - by Muhammad Yasir
    Hi there, I have a university project which is already 99% complete. It consists of two parts: a website (PHP) and a desktop application (Java). People have accounts on the website and want to query different information regarding their accounts. They send an SMS, which is received by the desktop application; it queries the website's database (MySQL) and sends the reply accordingly. This part is working superbly.

    The problem is that sometimes the website needs to instruct the desktop application to send a specific SMS to a particular number. Apparently there seems to be no way other than putting all the load on the DB server. This is how I made it work: the website puts SMS jobs in a specific table, and the Java application polls this table again and again; if it finds a job, it executes it. Even this part is working correctly, but unfortunately polling the DB like this is not acceptable to my university. :(

    The other approach I could think of is client-server. I tried making a Java server and a PHP client for it, so that whenever an SMS is to be sent, the website opens a socket connection to the desktop application and sends two strings (cell # and SMS message). Unfortunately I am unable to do this. I successfully made a Java server which works fine when connected to by a Java client, and my PHP client connects correctly to a PHP server, but when I try to cross them, they start hating each other: PHP shows no error, but Java throws a StreamCorruptedException when it tries to read the header of the input stream.

    Could someone please tell me what I can try to make the PHP client and Java server work together? Or, if the same purpose can be achieved by another means, how? Regards, Yasir

    Read the article

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution). The basic setup looks like this:

        Initial/
            Generate Initial Schema.sql
            Generate Initial Required Data.sql
            Generate Initial Test Data.sql
        Migration/
            0001_MigrationScriptForChangeOne.sql
            0002_MigrationScriptForChangeTwo.sql
            ...

    The process to spin up a database is to run all the Initial scripts, and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc.

    My question is: in this kind of setup, is it useful to also maintain this?

        Current/
            Stored Procedures/
                dbo.MyStoredProcedureCreateScript.sql
                ...
            Tables/
                dbo.MyTableCreateScript.sql
                ...
            ...

    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason I really like the idea, but I can't concretely justify its need. Am I missing something?

    The advantages would be:

        - For dev and source control, we would have the same object-per-file setup that we're used to.
        - For deployment, we can spin up a new DB instance at the latest version either by running Initial + Migration, or by running the scripts from Current/.
        - For dev, we do not need a DB instance running in order to do development. We can do "offline" development on the Current/ folder.

    The disadvantages would be:

        - For each change, we need to update the scripts in the Current/ folder, as well as create a Migration script (in the Migration/ folder).

    Thanks in advance for any input!

    Read the article

  • Issues Connecting to SQLExpress using Oracle SQL Developer

    - by ArtDeveloper
    Hey guys, I'm trying to create a connection inside Oracle SQL Developer to a SQLExpress database I have. Everything resides on the same machine, so there aren't any network issues I should have to deal with, but every time I follow the instructions and try to connect I get the following message: "Failure - Unable to get information from SQL Server: localhost."

    I can connect to the SQLExpress DB using SQL Management Studio and through an ODBC connection. I've installed the third-party extensions, enabled the TCP protocol in SQL Server Configuration Manager, and enabled the IP addresses. I'm assuming that the SQLExpress database is on port 1433 because I didn't change this when I installed, but if someone can tell me how to double-check that, I would appreciate that info as well.

    I set up the new connection with the following information:

        name: databasename
        username/password: (blank - I'm using Windows authentication)
        host: localhost
        port: 1433/databasename;instance=SQLEXPRESS

    (databasename is replaced with the actual DB name; I've just changed the name here to protect the innocent.)

    I've spent about a full day trying to get this connected, and gone through many Google results where other people have had this issue and solved it through various methods that I've tried, but none of them has resolved my issue. Any information would be much appreciated. Thank you in advance, AD

    Read the article

  • HTML text input and using the input as a variable in a script (tcl) / SQL (sqlite)

    - by Fantastic Fourier
    Hello all, I'm very VERY new at this whole web thing, and I'm just very confused in general. Basically, what I want to do is take an input via text using HTML and add that input to the database table trans. It should be simple, but I am lost.

        <li>Transaction Number</li>
        <li><input type="text" name="tnumber"></li>  <!-- do I need to use value? -->
        <li>Employee Name</li>
        <li><input type="text" name="ename"></li>
        <li><input type="SUBMIT" value="Add"></li>
        ......
        ......
        sqlite3 db $::env(ROOT)/database.db
        mb eval {INSERT INTO trans VALUES ($tnumber, $ename)}
        mb close

    They are both in the same file, and there are only two fields in the database to keep things simple. What I can see here is that tnumber and ename aren't declared as variables. So how do I do that, so that the text input is assigned to the respective variables?

    Read the article

  • UIPickerView: change font color according to data

    - by Fulkron
    I'm using a pickerView with multiple components related to several fields in a database (Core Data). Is it possible to change the font color for a specific component according to the presence of data in the DB? For example, if the field in the DB is null, the component font color should be red, otherwise black. Any help will be appreciated! Dario

    ==================

    Thanks Kenny, I have to apply this to a single UIPickerView only, so I'm returning the view parameter (without modifications) for the others. The result is that all those pickers show empty rows. Thanks for the help! Here is the code fragment:

        - (UIView *)pickerView:(UIPickerView *)pickerView viewForRow:(NSInteger)row forComponent:(NSInteger)component reusingView:(UIView *)view {
            if (pickerView == tipoPk) {
                UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 30)];
                label.textColor = [UIColor redColor];
                switch (component) {
                    case PK_Tipo:
                        label.text = [tipoArray objectAtIndex:row];
                        break;
                    case PK_Settore:
                        label.text = [settoreArray objectAtIndex:row];
                        break;
                    default:
                        break;
                }
                return label;
            } else {
                return view; // <==== return view for non-related picker views, but no rows shown
            }
        }

    Read the article

  • New to MVVM - Best practices for separating data processing thread and UI thread?

    - by OffApps Cory
    Good day. I have started messing around with the MVVM pattern, and I am having some problems with UI responsiveness versus data processing.

    I have a program that tracks packages. Shipment and package entities are persisted in a SQL database and are displayed in a WPF view. Upon initial retrieval of the records, there is a noticeable pause before displaying the new shipments view, and I have not even implemented the code that counts shipments that are overdue/active yet (which will necessitate a tracking check via web service, and a lot of time).

    I have built this with the Ocean framework, and all appears to be doing well, except when I first started my foray into multi-threading. It broke, and it appeared to break something in Ocean... Here is what I did:

        Private QueryThread As New System.Threading.Thread(AddressOf GetShipments)

        Public Sub New()
            ' Insert code required on object creation below this point.
            Me.New(ViewManagerService.CreateInstance, ViewModelUIService.CreateInstance)

            'Perform initial query of shipments
            'QueryThread.Start()
            GetShipments()
            Console.WriteLine(Me.Shipments.Count)
        End Sub

        Public Sub New(ByVal objIViewManagerService As IViewManagerService, ByVal objIViewModelUIService As IViewModelUIService)
            MyBase.New(objIViewModelUIService)
        End Sub

        Public Sub GetShipments()
            Dim InitialResults = From shipment In db.Shipment.Include("Packages") _
                                 Select shipment
            Me.Shipments = New ShipmentsCollection(InitialResults, db)
        End Sub

    So I declared a new Thread, assigned it the GetShipments method, and instantiated it in the default constructor. Ocean freaks out at this, so there must be a better way of doing it. I have not had the chance to figure out the usage of the SQL ORM thing in Ocean, so I am using Entity Framework (perhaps one of these days I will look at NHibernate or something too).

    Any information would be greatly appreciated. I have looked at a number of articles and they all have examples of simple uses. Some have mentioned the Dispatcher, but none really go very far into how it is used. Anyone know any good tutorials? Cory
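
    The question asks about the general WPF pattern of keeping data access off the UI thread, so here is a minimal sketch of that pattern: run the query on a worker thread, then marshal the results back through the Dispatcher before touching bound collections. It is written in C# (the original is VB.NET), and the type and member names (ShipmentViewModel, LoadShipmentsFromDatabase, Shipments) are placeholders rather than part of the original project.

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Threading;
        using System.Windows.Threading;

        public class ShipmentViewModel
        {
            private readonly Dispatcher _dispatcher;

            public ObservableCollection<string> Shipments { get; private set; }

            public ShipmentViewModel()
            {
                Shipments = new ObservableCollection<string>();

                // Capture the dispatcher of the thread that creates the view model (the UI thread).
                _dispatcher = Dispatcher.CurrentDispatcher;

                // Run the slow query on a background worker thread so the view stays responsive.
                var worker = new Thread(LoadShipments) { IsBackground = true };
                worker.Start();
            }

            private void LoadShipments()
            {
                // Placeholder for the real data access (the Entity Framework query, web service calls, ...).
                IEnumerable<string> results = LoadShipmentsFromDatabase();

                // Data-bound collections must be updated on the thread that owns them,
                // so hand the results back to the UI thread via the Dispatcher.
                _dispatcher.BeginInvoke(new Action(() =>
                {
                    foreach (var shipment in results)
                        Shipments.Add(shipment);
                }));
            }

            private IEnumerable<string> LoadShipmentsFromDatabase()
            {
                Thread.Sleep(2000); // simulated slow query; replace with the real data access
                return new[] { "Shipment 1", "Shipment 2" };
            }
        }

    The key point of the sketch is that only the Dispatcher call touches UI-bound state; how the background work is scheduled (raw Thread, BackgroundWorker, thread pool) matters less than where the results are applied.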

    Read the article

  • appstats broken filename in callstack

    - by Ray Yun
    When I visit the appstats page and expand the call stack, the file path has a <path[N]> prefix, so clicking the file link emits a "no such file or directory" error.

    Stack:

        /google/appengine/datastore/datastore_rpc.py:951 make_rpc_call()
        /google/appengine/datastore/datastore_query.py:993 _make_query_result_rpc_call()
        /google/appengine/datastore/datastore_query.py:714 run_async()
        /google/appengine/datastore/datastore_query.py:685 run()
        /google/appengine/api/datastore.py:1281 GetBatcher()
        /google/appengine/api/datastore.py:1351 Get()
        /google/appengine/ext/db/__init__.py:1831 fetch()
        /google/appengine/ext/db/__init__.py:1778 get()
        /apps/fbapp/fbutil.py:232 oauth_load_fb_user()
        /apps/fbapp/fbutil.py:84 require_account()

    The error message for the App Engine source:

        [Errno 2] No such file or directory: u'/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/ipaddr/google/appengine/datastore/datastore_rpc.py'

    The error message for my source:

        IOError [Errno 2] No such file or directory: u'/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/antlr3/apps/fbapp/fbutil.py'

    I guess this is a path problem, and I found some official comment from Google: "If your request handlers modify sys.path, you must make the same modifications to sys.path in appengine_config.py so the Appstats web interface can see all files."

    Actually I'm using appengine_django, and two paths were inserted into sys.path. I did the same thing again in appengine_django.py, but that also failed. Maybe some custom setting in appengine_config.py can solve this problem, but I can't figure out how to fix it. What can I do?

    Read the article

  • EJB3 - @Column(insertable="false") question

    - by WhiteTigerK
    Hi all, I'm building a J2SE application with EJB3 and an Oracle Express Edition DB. My problem is this: I set up an entity bean in my project which matches a table in the DB. The table contains a column which is not nullable and has a default value. All I want is that when persisting new data to this table using the EJB, the column gets its default value. This is how I set it up in the project:

        // holds user's first name
        @Basic(optional = true)
        @Column(name = "FIRST_NAME", insertable = false, updatable = true, nullable = false)
        private String m_firstName;

    I also set it in the ORM.XML file:

        <basic name="firstName">
            <column name="FIRST_NAME" insertable="false" updatable="true" nullable="false"/>
        </basic>

    But for some reason, when creating a new entity bean without setting the first name field and then trying to persist it, I get the following exception:

        Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))): oracle.toplink.essentials.exceptions.DatabaseException
        Internal Exception: java.sql.SQLException: ORA-01400: cannot insert NULL into ("TSDB"."USERS"."FIRST_NAME")

    This means the persistence manager tries to insert the first name field although I told it not to. Am I doing something wrong here? Thanks!

    Read the article

  • Node.js + express.js + passport.js : stay authenticated between server restart

    - by Arnaud Rinquin
    I use passport.js to handle auth on my Node.js + express.js application. I set up a LocalStrategy to take users from MongoDB.

    My problem is that users have to re-authenticate when I restart my node server. This is a problem as I am actively developing it and don't want to log in at every restart... (plus I use node supervisor).

    Here is my app setup:

        app.configure(function(){
          app.use('/static', express.static(__dirname + '/static'));
          app.use(express.bodyParser());
          app.use(express.methodOverride());
          app.use(express.cookieParser());
          app.use(express.session({secret:'something'}));
          app.use(passport.initialize());
          app.use(passport.session());
          app.use(app.router);
        });

    And the session serializing setup:

        passport.serializeUser(function(user, done) {
          done(null, user.email);
        });

        passport.deserializeUser(function(email, done) {
          User.findOne({email:email}, function(err, user) {
            done(err, user);
          });
        });

    I tried the solution given on this blog using connect-mongodb, without success:

        app.use(express.session({
          secret:'something else',
          cookie: {maxAge: 60000 * 60 * 24 * 30}, // 30 days
          store: MongoDBStore({
            db: mongoose.connection.db
          })
        }));

    Read the article

  • OleDBDataAdapter UNPIVOT Query not working with Microsoft.ACE.OLEDB.12.0 DataSource

    - by JayT
    I am reading in an Excel file with an OleDbDataAdapter. I am using a select statement to UNPIVOT the data and insert it into a DataSet. However, the provider is generating this error:

        {"Syntax error in FROM clause."}

    But the SQL statement is correct, as I have used it in other DBs. Here is the code:

        string strConn = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + FileName +
                         ";Extended Properties=\"Excel 12.0 Xml;HDR=" + HDR + ";IMEX=1\"";
        OleDbConnection conn = new OleDbConnection(strConn);
        conn.Open();

        string SQL = "select Packhouse, Rm, Quantity , Product " +
                     " FROM " +
                     " ( " +
                     "   SELECT Date, Packhouse, Rm, [FG XL], [FG L] " +
                     "   FROM [" + xlSheet + "] " +
                     " ) Main " +
                     " UNPIVOT " +
                     " ( " +
                     "   Quantity FOR Product in ([FG XL], [FG L]) " +
                     " ) Sub " +
                     " WHERE (Date = '2010/03/08') and Quantity <> '0' and Packhouse = 'A' and Rm = '1' ";

        OleDbDataAdapter adapter = new OleDbDataAdapter();
        adapter.SelectCommand = new OleDbCommand(SQL, conn);
        ds[sequencecounter] = new DataSet();
        adapter.Fill(ds[sequencecounter], xlSheet);

    If I copy and paste the Excel data into a DB, then the select query works, but the data presented to me is in Excel spreadsheets. If anyone could provide help on this it will be much appreciated. Regards, J
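
    One hedged workaround worth noting: UNPIVOT is T-SQL syntax, and the Access/ACE SQL dialect used by the OLE DB provider may simply not recognize it, which would explain the FROM-clause error even though the statement runs fine against SQL Server. In that case the sheet can be read as-is and unpivoted in memory. The sketch below is illustrative; the column names are taken from the question, everything else is an assumption.

        // Read the worksheet with a plain SELECT, then unpivot the product columns in memory.
        // Requires references to System.Data and System.Data.OleDb types.
        using System.Data;
        using System.Data.OleDb;

        static DataTable ReadAndUnpivot(string connectionString, string sheetName)
        {
            var raw = new DataTable();
            using (var conn = new OleDbConnection(connectionString))
            using (var adapter = new OleDbDataAdapter(
                "SELECT [Date], [Packhouse], [Rm], [FG XL], [FG L] FROM [" + sheetName + "]", conn))
            {
                adapter.Fill(raw);
            }

            // Build the unpivoted shape: one row per (source row, product column) pair.
            var result = new DataTable();
            result.Columns.Add("Packhouse", typeof(string));
            result.Columns.Add("Rm", typeof(string));
            result.Columns.Add("Product", typeof(string));
            result.Columns.Add("Quantity", typeof(string));

            string[] productColumns = { "FG XL", "FG L" };
            foreach (DataRow row in raw.Rows)
            {
                foreach (string product in productColumns)
                {
                    result.Rows.Add(row["Packhouse"], row["Rm"], product, row[product]);
                }
            }
            return result;
        }

    The date, quantity, packhouse and room filters from the original WHERE clause could then be applied to the unpivoted DataTable with DataTable.Select or a LINQ query over its rows.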

    Read the article

  • Getting a ResultSet/RefCursor over a database link

    - by JonathanJ
    From the answers to http://stackoverflow.com/questions/1122175/calling-a-stored-proc-over-a-dblink it seems that it is not possible to call a stored procedure and get the ResultSet/RefCursor back if you are making the SP call across a remote DB link. We are also using Oracle 10g.

    We can successfully get single-value results across the link, and can successfully call the SP and get the results locally, but we get the same 'ORA-24338: statement handle not executed' error when reading the ResultSet from the remote DB.

    My question: is there any workaround to using the stored procedure? Is a shared view a better solution? Piped rows?

    Sample stored procedure:

        CREATE OR REPLACE PACKAGE BODY example_SP IS
          PROCEDURE get_terminals(p_CD_community IN community.CD_community%TYPE,
                                  p_cursor OUT SYS_REFCURSOR) IS
          BEGIN
            OPEN p_cursor FOR
              SELECT cd_terminal
              FROM terminal t, community c
              WHERE c.cd_community = p_CD_community
                AND t.id_community = c.id_community;
          END;
        END example_SP;
        /

    Sample Java code that works locally but not remotely:

        Connection conn = DBConnectionManagerFactory.getDBConnectionManager().getConnection();
        CallableStatement cstmt = null;
        ResultSet rs = null;
        String community = "EXAMPLE";
        try {
            cstmt = conn.prepareCall("{call example_SP.get_terminals@remote_address(?,?)}");
            cstmt.setString(1, community);
            cstmt.registerOutParameter(2, OracleTypes.CURSOR);
            cstmt.execute();
            rs = (ResultSet)cstmt.getObject(2);
            while (rs.next()) {
                LogUtil.getLog().logInfo("Terminal code=" + rs.getString("cd_terminal"));
            }
        }

    Read the article

  • ASP.NET MVC pagination problem????

    - by MD_Oppenheimer
    OK, this is starting to get mildly irritating. I tried to implement Twitter-style paging using ASP.NET MVC and jQuery. My problem is that when not using Request.IsAjaxRequest() (for users with JavaScript turned off) it works fine, obviously posting back the whole page. When I run the code for Request.IsAjaxRequest(), it skips entries and does not return results in order. This is the code I have:

        public ActionResult Index(int? startRow)
        {
            StatusUpdatesRepository statusUpdatesRepository = new StatusUpdatesRepository();
            if (!startRow.HasValue)
                startRow = Globals.Settings.StatusUpdatesSection.StatusUpdateCount; // 5 - default starting row

            // Retrieve the first page with a page size of entryCount
            int totalItems;
            if (Request.IsAjaxRequest())
            {
                IEnumerable<StatusUpdate> PagedEntries = statusUpdatesRepository.GetLastStatusUpdates(
                    startRow.Value, Globals.Settings.StatusUpdatesSection.StatusUpdateCount, out totalItems);
                if (startRow < totalItems)
                    AddMoreUrlToViewData(startRow.Value);
                return View("StatusUpdates", PagedEntries);
            }

            // Retrieve the first page with a page size of global setting
            // First run: skip 0, take 5
            IEnumerable<StatusUpdate> entries = statusUpdatesRepository.GetLastStatusUpdates(
                0, startRow.Value, out totalItems);
            if (startRow < totalItems)
                AddMoreUrlToViewData(startRow.Value);
            return View(entries);
        }

        private void AddMoreUrlToViewData(int entryCount)
        {
            ViewData["moreUrl"] = Url.Action("Index", "Home",
                new { startRow = entryCount + Globals.Settings.StatusUpdatesSection.StatusUpdateCount });
        }

    My GetLastStatusUpdates function:

        public IQueryable<StatusUpdate> GetLastStatusUpdates(int startRowIndex, int maximumRows, out int statusUpdatesCount)
        {
            statusUpdatesCount = db.StatusUpdates.Count();
            return db.StatusUpdates
                     .Skip(startRowIndex)
                     .Take(maximumRows)
                     .OrderByDescending(s => s.AddedDate);
        }

    I'm really fresh out of ideas as to why this is not working properly when responding to a Request.IsAjaxRequest(); i.e. when I turn off JavaScript in the browser, the code works perfectly, except I don't want to repost the whole page.
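
    A hedged observation on the repository method above: Skip and Take run before OrderByDescending, so each page is taken from whatever order the rows happen to come back in and only sorted afterwards, which can easily look like skipped or out-of-order entries. A minimal sketch of the usual ordering (sort first, then page), using the same entity and property names as the question:

        public IQueryable<StatusUpdate> GetLastStatusUpdates(int startRowIndex, int maximumRows, out int statusUpdatesCount)
        {
            statusUpdatesCount = db.StatusUpdates.Count();
            return db.StatusUpdates
                     .OrderByDescending(s => s.AddedDate) // establish a stable order first
                     .Skip(startRowIndex)                 // then skip past the rows already shown
                     .Take(maximumRows);                  // and take the next page
        }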

    Read the article

  • Linq-to-sql Compiled Query returns object NOT belonging to submitted DataContext ?

    - by Vladimir Kojic
    Compiled query:

        public static class Machines
        {
            public static readonly Func<OperationalDataContext, short, Machine>
                QueryMachineById = CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                    db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

            public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
            {
                Machine machine;

                // Old code (working)
                //var machineRepository = unitOfWork.GetRepository<Machine>();
                //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

                // New code (making problems)
                machine = QueryMachineById(unitOfWork.DataContext, id);

                return machine;
            }
        }

    It looks like the compiled query is returning a result from another data context:

        [TestMethod]
        public void GetMachinesTest()
        {
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                // Compile Query
                var machine = Machines.GetMachineById(unitOfWork, 3);
            }

            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();

                // Get From Repository
                var machineFromRepository = machineRepository.Find(m => m.MachineID == 2).SingleOrDefault();
                var machine = Machines.GetMachineById(unitOfWork, 2);

                VerifyHuskyHostMachine(machineFromRepository, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml",
                    false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");
                VerifyHuskyHostMachine(machine, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml",
                    false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");

                Assert.AreSame(machineFromRepository, machine); // FAIL
            }
        }

    If I run other (complex) unit tests I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext."

    Another important piece of information is that this test runs under a TransactionScope!

    UPDATE: It looks like the following link describes a similar problem (is this bug solved?): http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/9bcffc2d-794e-4c4a-9e3e-cdc89dad0e38

    Read the article

  • Understanding WordProcessingML tags and avoiding unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files containing data fetched from a DB, applying the respective styles, fonts, symbols, etc. If the data fetched from the DB is quite large, then there is a problem displaying that data in the .docx file. I found that internally MS Word 2007 will write some content through tags that may not be needed to display the data. Hence I am figuring out which MS Word tags are actually needed when converting to a .xml file, so that I can avoid unnecessary tags and build only those needed to display the data. I am therefore planning to write my own .xml with only the MS Word tags that are needed, rather than generating a .xml from a .docx file.

    My queries are:

        1) Is it right that MS Word generates some tags which may not be needed during the conversion of .docx to document.xml, and that this makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?
        2) Please send links to understand the MS Word tags and their advantages - which tags are needed and which are not?
        3) Is my approach of writing a new .xml similar to document.xml (the .docx conversion) a worthy one, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this. Thanks in advance. Thanks, Rithu

    Read the article

  • SubmitChanges doesn't save but removes inserts from change set, no errors

    - by winston schröder
    Hi everybody, I have a deeper question regarding the debug functionality of the LINQ to SQL SubmitChanges() function.

    I want to save a record in a table of a locally cached DB (LocalDBCache: server SQL Express 2008, client SQL CE). Before calling SubmitChanges I can find the new item via DataContext.GetChangeSet(). After calling SubmitChanges, the items to insert have been removed from the change set (that's what this function is supposed to do). There are no change conflicts and no errors in the DB's log output. No exception at all. The table's count stays at the same value.

        if ((e.Parameter == null) || (!e.Parameter.GetType().Equals(typeof(LibDB.Client.Vehicles))))
            return;
        LibDB.Client.Vehicles tmp = e.Parameter as LibDB.Client.Vehicles;
        try
        {
            ChangeSet cs = this._dc.GetChangeSet();
            if ((tmp == null) || (this._dc == null))
                return;

            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 0)
                this._dc.Vehicles.InsertOnSubmit(tmp);
            else if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                this._dc.Vehicles.Attach(tmp, true);
            else
                return;

            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    //this._dc.Refresh(RefreshMode.OverwriteCurrentValues, this._dc.Vehicles);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }

            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                MessageBox.Show("Vehicle not saved.");

            this.vehSelector.ResetLayout();
        }

    I would appreciate any help, since I'm losing hope of finding the error. Thanks in advance, Winston
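
    One hedged guess at the culprit in the fragment above: a TransactionScope that is disposed without Complete() being called rolls the transaction back, which would silently undo the insert even though SubmitChanges() itself threw no exception. A minimal sketch of the usual pattern:

        using (TransactionScope ts = new TransactionScope())
        {
            this._dc.SubmitChanges(); // flush the change set to the database
            ts.Complete();            // mark the scope complete so Dispose() commits instead of rolling back
        }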

    Read the article

  • Remove ActiveRecord in Rails 3 (beta)

    - by Splash
    Now that the Rails 3 beta is out, I thought I'd have a look at rewriting an app I have just started work on in Rails 3 beta, both to get a feel for it and to get a bit of a head start. The app uses MongoDB and MongoMapper for all of its models and therefore has no need for ActiveRecord. In the previous version, I am unloading ActiveRecord in the following way:

        config.frameworks -= [ :active_record ]  # inside environment.rb

    In the latest version this does not work - it just throws an error:

        /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/configuration.rb:126:in `frameworks': config.frameworks in no longer supported. See the generated config/boot.rb for steps on how to limit the frameworks that will be loaded (RuntimeError)
        from *snip*

    Of course, I have looked at boot.rb as it suggested, but as far as I can see there is no clue there as to how I might go about unloading ActiveRecord.

    The reason I need to do this is that not only is it silly to be loading something I don't want, it is also complaining about its inability to make a DB connection even when I try to run a generator for a controller. This is because I've wiped database.yml and replaced it with connection details for MongoDB, in order to use this gist for putting MongoDB connection details in database.yml. Not sure why it needs to be able to initiate a DB connection at all just to generate a controller anyway....

    Is anyone aware of the correct Rails 3 way of doing this?

    Read the article

  • Conversion failed when converting datetime from character string. Linq To SQL & OpenXML

    - by chobo2
    Hi, I've been following this tutorial on how to do a LINQ to SQL batch insert: http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

    However, I have a datetime field in my database and I keep getting this error:

        System.Data.SqlClient.SqlException was unhandled
        Message="Conversion failed when converting datetime from character string."
        Source=".Net SqlClient Data Provider"
        ErrorCode=-2146232060
        Class=16
        LineNumber=7
        Number=241
        Procedure="spTEST_InsertXMLTEST_TEST"
        Server=""
        State=1
        StackTrace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
            at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)

    I am not sure why; when I just take the datetime from the generated XML file and manually copy it into SQL Server 2005, it has no problem with it and converts it just fine.

    This is my SP:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        exec sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO UserTable(CreateDate)
        SELECT XMLProdTable.CreateDate
        FROM OPENXML(@hDoc, 'ArrayOfUserTable/UserTable', 2)
            WITH (
                CreateDate datetime
            ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    C# code:

        using (TestDataContext db = new TestDataContext())
        {
            UserTable[] testRecords = new UserTable[1];

            for (int count = 0; count < 1; count++)
            {
                UserTable testRecord = new UserTable() { CreateDate = DateTime.Now };
                testRecords[count] = testRecord;
            }

            StringBuilder sBuilder = new StringBuilder();
            System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
            XmlSerializer serializer = new XmlSerializer(typeof(UserTable[]));
            serializer.Serialize(sWriter, testRecords);
            db.spTEST_InsertXMLTEST_TEST(sBuilder.ToString());
        }

    Rendered XML doc:

        <?xml version="1.0" encoding="utf-16"?>
        <ArrayOfUserTable xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <UserTable>
            <CreateDate>2010-05-19T19:35:54.9339251-07:00</CreateDate>
          </UserTable>
        </ArrayOfUserTable>
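
    A hedged sketch of one possible way around the conversion error: the serialized value above carries seven fractional digits and a "-07:00" offset, which the T-SQL datetime type cannot parse from a string, so writing the date in a plainer format before it reaches OPENXML may be enough. Building the XML by hand with XElement (shown below) is purely illustrative and not part of the original tutorial; formatting the value through a string property on UserTable would work the same way.

        using System;
        using System.Xml.Linq;

        static string BuildUserTableXml(DateTime createDate)
        {
            var doc = new XElement("ArrayOfUserTable",
                new XElement("UserTable",
                    // "s" yields 2010-05-19T19:35:54 - no offset, no sub-second digits,
                    // which the OPENXML "CreateDate datetime" column can convert.
                    new XElement("CreateDate", createDate.ToString("s"))));
            return doc.ToString();
        }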

    Read the article

  • Help Converting T-SQL Query to LINQ Query

    - by campbelt
    I am new to LINQ, and so am struggling with some queries that I'm sure are pretty simple. In any case, I have been hitting my head against this for a while, but I'm stumped. Can anyone here help me convert this T-SQL query into a LINQ query? Once I see how it is done, I'm sure I'll have some questions about the syntax:

        SELECT BlogTitle
        FROM Blogs b
        JOIN BlogComments bc ON b.BlogID = bc.BlogID
        WHERE b.Deleted = 0
          AND b.Draft = 0
          AND b.[Default] = 0
          AND bc.Deleted = 0
        GROUP BY BlogTitle
        ORDER BY MAX([bc].[Timestamp]) DESC

    Just to show that I have tried to solve this on my own, here is what I've come up with so far, though it doesn't compile, let alone work...

        var iqueryable = from blog in db.Blogs
                         join blogComment in db.BlogComments on blog.BlogID equals blogComment.BlogID
                         where blog.Deleted == false && blog.Draft == false &&
                               blog.Default == false && blogComment.Deleted == false
                         group blogComment by blog.BlogID into blogGroup
                         orderby blogGroup.Max(blogComment => blogComment.Timestamp)
                         select blogGroup;
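
    A hedged sketch of one possible translation, assuming the intent of the T-SQL is "titles of non-deleted, non-draft, non-default blogs that have comments, ordered by the most recent comment". Grouping by BlogTitle mirrors the GROUP BY in the SQL, and selecting the group key yields just the title, like the original SELECT list; the entity and property names are the ones used in the question.

        var blogTitles = from blog in db.Blogs
                         join blogComment in db.BlogComments on blog.BlogID equals blogComment.BlogID
                         where !blog.Deleted && !blog.Draft && !blog.Default && !blogComment.Deleted
                         group blogComment by blog.BlogTitle into blogGroup
                         orderby blogGroup.Max(bc => bc.Timestamp) descending
                         select blogGroup.Key;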

    Read the article

  • How to Convert Use of SQLite to Simple SQL Commands in C#

    - by Nasser Hajloo
    I want to get started with the DayPilot control. I do not use SQLite, and this control is documented based on SQLite. I want to use SQL Server instead of SQLite, so if you can, please do this for me. The main site with samples: http://www.daypilot.org/calendar-tutorial.html

    The database contains a single table with the following structure:

        CREATE TABLE event (
            id VARCHAR(50),
            name VARCHAR(50),
            eventstart DATETIME,
            eventend DATETIME);

    Loading events:

        private DataTable dbGetEvents(DateTime start, int days)
        {
            SQLiteDataAdapter da = new SQLiteDataAdapter(
                "SELECT [id], [name], [eventstart], [eventend] FROM [event] WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
                ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("start", start);
            da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

    Update:

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SQLiteConnection con = new SQLiteConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SQLiteCommand cmd = new SQLiteCommand("UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.AddWithValue("start", start);
                cmd.Parameters.AddWithValue("end", end);
                cmd.ExecuteNonQuery();
            }
        }
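
    A hedged sketch of the same two methods rewritten against System.Data.SqlClient for SQL Server, assuming the table and the "db" connection string stay the same. Only the provider classes change (SqlConnection / SqlCommand / SqlDataAdapter) and the parameter names gain the customary "@" prefix.

        // Requires references to System.Data, System.Data.SqlClient and System.Configuration.
        private DataTable dbGetEvents(DateTime start, int days)
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT [id], [name], [eventstart], [eventend] FROM [event] " +
                "WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
                ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("@start", start);
            da.SelectCommand.Parameters.AddWithValue("@end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SqlCommand cmd = new SqlCommand(
                    "UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("@id", id);
                cmd.Parameters.AddWithValue("@start", start);
                cmd.Parameters.AddWithValue("@end", end);
                cmd.ExecuteNonQuery();
            }
        }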

    Read the article

  • OutOfMemory exception when loading an image in .Net

    - by Ben
    Hi, I'm loading an image from a SQL CE DB and then trying to load it into a PictureBox. I am saving the image like this:

        if (ofd.ShowDialog() == DialogResult.OK)
        {
            picArtwork.ImageLocation = ofd.FileName;
            using (System.IO.FileStream fs = new System.IO.FileStream(ofd.FileName, System.IO.FileMode.Open))
            {
                byte[] imageAsBytes = new byte[fs.Length];
                fs.Read(imageAsBytes, 0, imageAsBytes.Length);
                thisItem.Artwork = imageAsBytes;
                fs.Close();
            }
        }

    and then saving it to the DB using LINQ to SQL. I load the image back like so:

        using (FileStream fs = new FileStream(@"C:\Temp\img.jpg", FileMode.CreateNew, FileAccess.Write))
        {
            byte[] img = (byte[])encoding.GetBytes(ThisFilm.Artwork.ToString());
            fs.Write(img, 0, img.Length);
        }

        picArtwork.Image = System.Drawing.Bitmap.FromFile(@"C:\Temp\img.jpg");

    but I am getting an OutOfMemoryException. I have read that this is a slight red herring and that there is probably something wrong with the file type, but I can't figure out what. Any ideas? Thanks
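
    A hedged sketch of loading the stored bytes directly, assuming ThisFilm.Artwork is the byte[]/Binary column saved above. Calling ToString() on it and re-encoding the resulting string produces bytes that are no longer a valid JPEG, and GDI+ reports unreadable image data as an OutOfMemoryException; reading the original bytes through a MemoryStream avoids the round-trip through a string (and the temp file).

        using System.Drawing;
        using System.IO;

        // Binary.ToArray() if Artwork is System.Data.Linq.Binary; use the value directly if it is already byte[].
        byte[] artworkBytes = ThisFilm.Artwork.ToArray();

        // GDI+ keeps reading from the stream for the lifetime of the Image, so leave it open
        // while the PictureBox is showing this image.
        var ms = new MemoryStream(artworkBytes);
        picArtwork.Image = Image.FromStream(ms);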

    Read the article

  • What is causing this OverflowError in Django?

    - by orokusaki
    I'm using a normal ModelForm.save() to create an object, and this exception comes up. It worked fine before, until I added commit_manually, transaction.rollback(), and transaction.commit() to my view. Has anyone else run into this? Is this because of sqlite3?

        OverflowError: long too big to convert
        C:\Python26\Lib\site-packages\django-trunk\django\db\backends\sqlite3\base.py in execute, line 197
        params: (203866156270872165269663274649746494334L,)
        query: u'SELECT (1) AS "a", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."id" = ? LIMIT 1'
        self <django.db.backends.sqlite3.base.SQLiteCursorWrapper object at 0x015D5A98>

    Why would that L param be passed in?

    Read the article

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008.

    Taking the concept one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things, but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? e.g.

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
        AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
            MyCLRTVFunction @myTable -- returns a table result calculated using the input

    The lat/lon - distance thing is a silly example that should of course be better handled entirely in SQL, but I hope it illustrates the general intent of table-in, table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys, but that's sensitive to changes in the DB schema. So I hope to just have SQL bundle up all the source data and pass it to the function.

    Bonus question - assuming this works at all, would it also work with more than one input table?
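
    A hedged sketch of the fallback the question already mentions: to my knowledge, SQL Server 2008 does not allow a table-valued parameter on a SQLCLR routine, so one common workaround is to pass only a key and read the source rows over the context connection inside the function. The table and column names (Locations, Community, LocationName, Lat, Lon) are illustrative, not taken from the original schema.

        using System.Collections;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public class GeoDistance
        {
            private class LocationRow
            {
                public string Name;
                public double Lat;
                public double Lon;
            }

            [SqlFunction(DataAccess = DataAccessKind.Read,
                         FillRowMethodName = "FillRow",
                         TableDefinition = "LocationName NVARCHAR(50), Lat FLOAT, Lon FLOAT")]
            public static IEnumerable GetLocations(SqlString communityKey)
            {
                var rows = new List<LocationRow>();

                // "context connection=true" reuses the calling session's connection inside SQL Server.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT LocationName, Lat, Lon FROM Locations WHERE Community = @key", conn);
                    cmd.Parameters.AddWithValue("@key", communityKey.Value);

                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            rows.Add(new LocationRow
                            {
                                Name = reader.GetString(0),
                                Lat = reader.GetDouble(1),
                                Lon = reader.GetDouble(2)
                            });
                        }
                    }
                }
                return rows; // each element is handed to FillRow to produce one output row
            }

            public static void FillRow(object obj, out SqlString locationName, out SqlDouble lat, out SqlDouble lon)
            {
                var row = (LocationRow)obj;
                locationName = new SqlString(row.Name);
                lat = new SqlDouble(row.Lat);
                lon = new SqlDouble(row.Lon);
            }
        }

    As the question notes, this ties the CLR code to the schema; whether that trade-off is acceptable depends on how stable the source tables are.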

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a MySQL table with a few simple columns, an auto increment, and a hash (varchar, UNIQUE). Is it possible to give MySQL a query that will add a row and generate a unique hash without multiple queries?

    Currently, the only way I can think of to achieve this is with a while loop, which I worry would become more and more processor intensive the more entries were in the db. Here's some pseudo-PHP, obviously untested, but it gets the general idea across:

        while(!query("INSERT INTO table (hash) VALUES (".generate_hash().");")){
            //found conflict, try again.
        }

    In the above example, the hash column would be UNIQUE, and so the query would fail on a collision. The problem is, say there are 500,000 entries in the db and I'm working off of a base36 hash generator with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries. In fact, any more than 5 I would consider unacceptable.

    So, can I do this with pure SQL? I would need to generate a base62, 6-char string (like "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it, or even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
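
    One collision-free approach sometimes used for this: derive the string from the auto-increment id itself by encoding the id in base 62, so uniqueness comes for free and no retry loop is needed (insert the row, read last_insert_id, then update the hash column). The encoding is sketched in C# purely for illustration, since the question itself is PHP/MySQL; the same few lines translate directly.

        using System.Text;

        static string ToBase62(long value)
        {
            const string alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
            if (value == 0) return "0";
            var sb = new StringBuilder();
            while (value > 0)
            {
                sb.Insert(0, alphabet[(int)(value % 62)]); // prepend the next least-significant digit
                value /= 62;
            }
            return sb.ToString();
        }

        // Example: ToBase62(1234567) == "5ban" - row id 1234567 maps to a short string that can never collide.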

    Read the article

  • Like EXEC command in Silverlight (save and load properties of elements dynamically)

    - by Meysam Javadi
    I have some elements in my container and want to save all the properties of these elements. I list the elements with VisualTreeHelper and save their attributes in a DB. My question is: how do I retrieve these properties and apply them? I think Silverlight may have some statement that behaves like EXEC in SQL Server. I save the properties in one line, delimited by semicolons. (If you have a better suggestion, I'd appreciate it.)

    Edit: Suppose this scenario: the end user chooses a tool from MyToolbox (a container like a Grid), a dialog shows its properties for creation, and finally the Grid is drawn. Next he/she chooses an element (like a Button) and drops it onto one of the grid's cells. Now I want to save the workspace that he/she created!

    My RootLayout has one container control, so all elements are children of it. For now I want to create one string that contains all the general properties (not all of them) and save it to the DB, and when I load this control, create an element of the saved type and apply the saved properties to it, with something like an EXEC command. Is this possible? Do you have another approach for this scenario? (Guide me with an example, please.)

    Read the article

  • RIA Services: Inserting multiple presentation-model objects

    - by nlawalker
    I'm sharing data via RIA Services using a presentation model on top of LINQ to SQL classes.

    On the Silverlight client, I created a couple of new entities (album and artist), associated them with each other (by either adding the album to the artist's album collection or setting the Artist property on the album - either one works), added them to the context, and submitted changes.

    On the server, I get two separate Insert calls - one for the album and one for the artist. These entities are new, so their ID values are both set to the default int value (0 - keep in mind that, depending on my DB, this could be a valid ID in the DB) because, as far as I know, you don't set IDs for new entities on the client.

    This would all work fine if I were transferring the LINQ to SQL classes via my RIA services, because even though the Album insert includes the Artist and the Artist insert includes the Album, both are entities and the L2S context recognizes them. However, with my custom presentation-model objects, I need to convert them back to the LINQ to SQL classes, maintaining the associations in the process, so they can be added to the L2S context. Put simply, as far as I can tell, this is impossible. Each entity gets its own Insert call, but there's no way to just insert the one entity, because without IDs the associations are lost. If the database used GUID identifiers it would be a different story, because I could set those on the client.

    Is this possible, or should I be pursuing another design?

    Read the article
