Search Results

Search found 3546 results on 142 pages for 'dos batch'.

Page 127 of 142

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I am querying my server for another batch of content. If it doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far: Passing the returned array of values from my server to an NSOperation to process. In the operation, I create a new managed object context to work with. In the operation, I iterate through the array and execute a fetch request to see if the object exists (based on its server ID). If the object doesn't exist, we create it and insert it into the operation's managed object context. After the iteration completes, we save the managed object context, which triggers a merge notification on my main thread. At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.
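
    A minimal sketch (not from the original question) of the merge-notification wiring described above, in Objective-C; the context, controller and selector names are assumptions:

```objc
// In the view controller that owns the NSFetchedResultsController (main thread).
// Assumes self.managedObjectContext is the main-thread context backing the controller.
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(operationContextDidSave:)
                                                 name:NSManagedObjectContextDidSaveNotification
                                               object:nil];
}

- (void)operationContextDidSave:(NSNotification *)notification {
    // The merge must run on the thread that owns the receiving context.
    [self.managedObjectContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                                withObject:notification
                                             waitUntilDone:YES];
    // Re-running the fetch picks up rows that already existed locally,
    // not just the ones the merge inserted.
    NSError *error = nil;
    [self.fetchedResultsController performFetch:&error];
    [self.tableView reloadData];
}
```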

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us. Whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods to retrieve data and has already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a field for 'Modified' that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating constantly running 'batch' processes that go through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • Neo4j Reading data / performing shortest path calculations on stored data

    - by paddydub
    I'm using the Batch_Insert example to insert data into the database. How can I read this data back from the database? I can't find any examples of how to do this. public static void CreateData() { // create the batch inserter BatchInserter inserter = new BatchInserterImpl( "var/graphdb", BatchInserterImpl.loadProperties( "var/neo4j.props" ) ); Map<String,Object> properties = new HashMap<String,Object>(); properties.put( "name", "Mr. Andersson" ); properties.put( "age", 29 ); long node1 = inserter.createNode( properties ); properties.put( "name", "Trinity" ); properties.remove( "age" ); long node2 = inserter.createNode( properties ); inserter.createRelationship( node1, node2, DynamicRelationshipType.withName( "KNOWS" ), null ); inserter.shutdown(); } I would like to store graph data in the database, graph.makeEdge( "s", "c", "cost", (double) 7 ); graph.makeEdge( "c", "e", "cost", (double) 7 ); graph.makeEdge( "s", "a", "cost", (double) 2 ); graph.makeEdge( "a", "b", "cost", (double) 7 ); graph.makeEdge( "b", "e", "cost", (double) 2 ); Dijkstra<Double> dijkstra = getDijkstra( graph, 0.0, "s", "e" ); What is the best method to store this kind of data with tens of thousands of edges, and then run the Dijkstra algorithm to find shortest paths using the stored graph data?
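
    For reference, a rough sketch of reading the batch-inserted data back with the embedded API (the store path matches the snippet above; the node id, and the use of the Neo4j 1.x EmbeddedGraphDatabase class, are assumptions for illustration):

```java
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class ReadBackExample {
    public static void main(String[] args) {
        // Open the same store the BatchInserter wrote to; the inserter must have been shut down first.
        GraphDatabaseService graphDb = new EmbeddedGraphDatabase("var/graphdb");
        try {
            // The long ids returned by inserter.createNode() are the node ids to use here
            // (1 is a placeholder; in practice you would keep or index the ids you got back).
            Node andersson = graphDb.getNodeById(1);
            System.out.println(andersson.getProperty("name"));

            // Walk the KNOWS relationships created during the batch insert.
            for (Relationship rel : andersson.getRelationships(Direction.OUTGOING)) {
                System.out.println("knows: " + rel.getEndNode().getProperty("name"));
            }
        } finally {
            graphDb.shutdown();
        }
    }
}
```

    For the shortest-path part, the stored nodes and relationships (with a cost property on each relationship) can then be handed to the Dijkstra implementation in the graph-algo component, using a cost evaluator that reads that property, much like the in-memory graph in the snippet above.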

    Read the article

  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.
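
    As one sketch of the batch-file-driven approach (project and directory names here are invented, and only stock MSBuild properties are used): TargetFrameworkVersion and OutputPath can be overridden per invocation, so a plain .bat can produce all three builds without opening Visual Studio:

```bat
@echo off
rem build-all.bat - hypothetical script building each framework target from the command line.
rem The .NET 3.5 MSBuild understands TargetFrameworkVersion v2.0 and v3.5;
rem the 4.0 toolset is needed for v4.0.
set MSBUILD35=%WINDIR%\Microsoft.NET\Framework\v3.5\MSBuild.exe
set MSBUILD40=%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe

%MSBUILD35% MyProject.csproj /t:Rebuild /p:Configuration=Release;TargetFrameworkVersion=v2.0;OutputPath=bin\Release\net20\
%MSBUILD35% MyProject.csproj /t:Rebuild /p:Configuration=Release;TargetFrameworkVersion=v3.5;OutputPath=bin\Release\net35\
%MSBUILD40% MyProject.csproj /t:Rebuild /p:Configuration=Release;TargetFrameworkVersion=v4.0;OutputPath=bin\Release\net40\
```

    Conditional PropertyGroups keyed off TargetFrameworkVersion inside the csproj can then add or drop references per target (for example, System.Core only for v3.5 and later).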

    Read the article

  • Kick off a MapReduce job from my Java/MySQL webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a Java-based webapp, with a JPA-based ORM backed onto a MySQL relational database. Now, as part of the application I have a batch job that compares thousands of database records with each other. This job has become too time consuming and needs to be parallelized. I'm looking at using MapReduce and Hadoop in order to do this. However, I'm not too sure about how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from MySQL into Hadoop jobs. I have done some initial research on this and found the following relevant information and possibilities: 1) https://issues.apache.org/jira/browse/HADOOP-2536 gives an interesting overview of some inbuilt JDBC support 2) This article http://architects.dzone.com/articles/tools-moving-sql-database describes some third-party tools to move data from MySQL to Hadoop. To be honest I'm just starting out with learning about HBase and Hadoop but I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. cheers, Brian
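
    Along the lines of the inbuilt JDBC support mentioned in (1), here is a rough sketch of pulling rows straight out of MySQL into a MapReduce job with DBInputFormat; the table, column and class names are invented, and depending on the Hadoop version the classes live in org.apache.hadoop.mapreduce.lib.db or the older org.apache.hadoop.mapred.lib.db package:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// One row of a hypothetical "records" table.
public class RecordRow implements Writable, DBWritable {
    long id;
    String payload;

    public void readFields(ResultSet rs) throws SQLException { id = rs.getLong("id"); payload = rs.getString("payload"); }
    public void write(PreparedStatement ps) throws SQLException { ps.setLong(1, id); ps.setString(2, payload); }
    public void readFields(DataInput in) throws IOException { id = in.readLong(); payload = in.readUTF(); }
    public void write(DataOutput out) throws IOException { out.writeLong(id); out.writeUTF(payload); }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://dbhost/mydb", "user", "password");

        Job job = new Job(conf, "compare-records");
        job.setJarByClass(RecordRow.class);
        job.setInputFormatClass(DBInputFormat.class);
        // Reads "SELECT id, payload FROM records ORDER BY id", split across the mappers.
        DBInputFormat.setInput(job, RecordRow.class, "records",
                null /* conditions */, "id" /* order by */, "id", "payload");

        // ... set mapper, reducer, output key/value classes and output path here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```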

    Read the article

  • Finding missing symbols in libstdc++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. This file is dynamically linked against a few libraries: $ ldd /usr/local/test/lib/libtest.so linux-gate.so.1 => (0xb770d000) libstdc++-libc6.1-1.so.2 => not found libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb75e1000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7499000) /lib/ld-linux.so.2 (0xb770e000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb747c000) Unfortunately, in Debian/squeeze, there is no libstdc++-libc6.1-1.so.* file. Only a libstdc++.so.* file provided by the libstdc++6 package. I tried to link (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work; a batch of symbols seems to be missing when I try to link my .o files with this lib. /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete' /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)' /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user' /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new' /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)' What would you do? How can I find in which lib those symbols are exported?
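
    For the last question (finding which library exports a given symbol), a shell loop along these lines is one way to search; the directories are examples only:

```sh
# Scan the shared libraries under the usual library directories for one of the
# missing symbols (e.g. __builtin_new) in their dynamic, defined symbol tables.
for lib in /lib/*.so* /usr/lib/*.so* /usr/local/lib/*.so*; do
    if nm -D --defined-only "$lib" 2>/dev/null | grep -q '__builtin_new'; then
        echo "$lib"
    fi
done
```

    (On this particular box the search will most likely come up empty: __builtin_new, __builtin_vec_delete, __rtti_user and the old istrstream/istream signatures belong to the GCC 2.95-era C++ runtime that libstdc++-libc6.1-1.so.2 shipped with, which is why symlinking the modern libstdc++.so.6 in its place cannot satisfy them.)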

    Read the article

  • Ruby gem install question + answer (on Windows Vista Home Basic environment)

    - by Vamsi
    Recently I have been having problems installing the rcov gem on my Windows (Vista Home Basic) environment, so after googling I found one solution, which is gem install rcov -v 0.8.1.1.0 #version that installs without errors gem update rcov #update to the latest version, in my case rcov-0.8.1.2.0-x86-mswin32 But this solution didn't work on my colleague's system (Windows XP), and after that we came to know about the RubyInstaller DevKit for Windows. But that DevKit is not working on my Vista; when I tried gem install rcov in my command prompt, it gave me this error: C:\Users\Vamsi>gem install rcov Building native extensions. This could take a while... ERROR: Error installing rcov: ERROR: Failed to build gem native extension. D:/Spritle/Programs/Ruby/bin/ruby.exe extconf.rb creating Makefile nmake 'nmake' is not recognized as an internal or external command, operable program or batch file. Gem files will remain installed in D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8 for inspection. Results logged to D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8/ext/rcovrt/gem_make.out So after that my colleague tried to install nmake as well but it was throwing some other error. Can someone suggest a better solution for solving this problem for all Windows environments? I am aware of Cygwin for Windows but I am not sure that is a 100% solution either.

    Read the article

  • T-SQL - Left Outer Joins - Filters in the WHERE clause versus the ON clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create 2 sets of data within table one. groupby number ----------- ----------- 1 1 1 2 2 1 2 2 2 4 Table 2 has only one column. number ----------- 1 3 4 So Table 1 has the values 1,2,4 in group 2 and Table 2 has the values 1,3,4. I expect the following result when joining for Group 2: `Table 1 LEFT OUTER Join Table 2` T1_Groupby T1_Number T2_Number ----------- ----------- ----------- 2 2 NULL `Table 2 LEFT OUTER Join Table 1` T1_Groupby T1_Number T2_Number ----------- ----------- ----------- NULL NULL 3 The only way I can get this to work is if I put a where clause for the first join: PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause' select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number] from table1 LEFT OUTER join table2 --****************************** on table1.number = table2.number --****************************** WHERE table1.groupby = 2 AND table2.number IS NULL and a filter in the ON for the second: PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause' select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number] from table2 LEFT OUTER join table1 --****************************** on table2.number = table1.number AND table1.groupby = 2 --****************************** WHERE table1.number IS NULL Can anyone come up with a way of not using the filter in the on clause but in the where clause? The context of this is I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batchid for an extract and I am comparing the latest extract in a temp table to the batch from yesterday stored in a partitioned table, which also has all the previously extracted batches. Code to create table 1 and 2: create table table1 (number int, groupby int) create table table2 (number int) insert into table1 (number, groupby) values (1, 1) insert into table1 (number, groupby) values (2, 1) insert into table1 (number, groupby) values (1, 2) insert into table2 (number) values (1) insert into table1 (number, groupby) values (2, 2) insert into table2 (number) values (3) insert into table1 (number, groupby) values (4, 2) insert into table2 (number) values (4)
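
    One way to express the same comparison without a filter in the ON clause is to restrict table1 to the batch of interest in a derived table first; a sketch against the create/insert statements above (whether a derived table is acceptable here is an assumption):

```sql
-- New rows: in the latest batch (groupby = 2) but not in table2.
SELECT t1.groupby AS [T1_Groupby], t1.number AS [T1_Number], t2.number AS [T2_Number]
FROM (SELECT number, groupby FROM table1 WHERE groupby = 2) AS t1
LEFT OUTER JOIN table2 AS t2 ON t1.number = t2.number
WHERE t2.number IS NULL

-- Deleted rows: in table2 but not in the latest batch of table1.
SELECT t1.groupby AS [T1_Groupby], t1.number AS [T1_Number], t2.number AS [T2_Number]
FROM table2 AS t2
LEFT OUTER JOIN (SELECT number, groupby FROM table1 WHERE groupby = 2) AS t1 ON t2.number = t1.number
WHERE t1.number IS NULL
```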

    Read the article

  • Flash CS4 compiler Error 1120 when embedding PNGs into class instance variables.

    - by theolagendijk
    I have a Flash CS4 (Flash 9 ActionScript 3.0) project that compiles and runs perfectly on my machine. However, it is part of a big batch of FLAs that I want to compile on another (faster) machine. When I copy the project (the FLA and all ActionScript and asset files) to the faster machine, its Flash CS4 compiler gives me compiler error 1120 "Access of undefined property ButtonPause_PauseNormal". The property "PauseNormal" is an embedded PNG. The PNG is available. No transcoder errors. Here's the ActionScript for class "ButtonPause": package nl.platipus.NissanESM.buttons { import flash.display.*; import flash.events.*; public class ButtonPause extends Sprite { [Embed(source="../../../../player/pause.png")] private var PauseNormal:Class; [Embed(source="../../../../player/pause_mo.png")] private var PauseMouseOver:Class; private var stateNormal:Bitmap; private var stateMouseOver:Bitmap; public function ButtonPause() { stateNormal = new PauseNormal(); stateNormal.width = 29; stateNormal.height = 14; stateNormal.alpha = 1; addChild(stateNormal); stateMouseOver = new PauseMouseOver(); stateMouseOver.width = 29; stateMouseOver.height = 14; stateMouseOver.alpha = 0; addChild(stateMouseOver); width = 29; height = 14; addEventListener(MouseEvent.MOUSE_OVER, handleMouseOver); addEventListener(MouseEvent.MOUSE_OUT, handleMouseOut ); } private function handleMouseOver(evt:MouseEvent):void { stateNormal.alpha = 0; stateMouseOver.alpha = 1; } private function handleMouseOut(evt:MouseEvent):void { stateNormal.alpha = 1; stateMouseOver.alpha = 0; } } } (Both machines run the exact same Flash CS4 Professional Version 10.0.2 installation and both have the exact same publish settings and ActionScript 3.0 settings.) What's going on?

    Read the article

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files given a SQL Server query result (from a stored procedure or a SELECT * over a table or view). This will be a batch process which runs every day, and every day a new file is created. I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the output of the file needs to be dynamic. If a new column is added to my query result, the file must have this new column too, without having to modify my SSIS package. If a column is removed, then the flat file will no longer have it. I've tried to do this with SSIS, but when I create a new package I need to specify the number of columns. Another requirement is configuring the format of the output, depending on the data type of the column. If it's a datetime, the format needs to be YYYY-MM-DD. If it's a float, then I need to use 2 decimal digits, and so on. Does anyone know a tool that does this job? Thanks
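
    If no off-the-shelf tool turns up, a small custom exporter is not much code. Here is a rough C# sketch (connection string, query, delimiter and output name are placeholders, not a recommendation of any specific tool) that writes whatever columns the query returns and formats values by type:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

class FlatFileExporter
{
    static void Main()
    {
        // Placeholder connection string and query; the column list is never hard-coded.
        const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        const string query = "SELECT * FROM dbo.MyView";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(query, connection))
        using (StreamWriter writer = new StreamWriter("export_" + DateTime.Today.ToString("yyyyMMdd") + ".txt"))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Header row follows whatever columns the query happens to return today.
                string[] names = new string[reader.FieldCount];
                for (int i = 0; i < reader.FieldCount; i++) names[i] = reader.GetName(i);
                writer.WriteLine(string.Join("|", names));

                while (reader.Read())
                {
                    string[] values = new string[reader.FieldCount];
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        object value = reader.GetValue(i);
                        if (value is DateTime)
                            values[i] = ((DateTime)value).ToString("yyyy-MM-dd");
                        else if (value is double || value is float || value is decimal)
                            values[i] = Convert.ToDecimal(value).ToString("0.00");
                        else
                            values[i] = Convert.ToString(value); // DBNull becomes an empty field
                    }
                    writer.WriteLine(string.Join("|", values));
                }
            }
        }
    }
}
```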

    Read the article

  • JMX - MBean automated registration on application deployment

    - by Gadi
    Hi All, I need some direction with JMX and J2EE. I am aware (after a few weeks of research) that the JMX specification is missing as far as deployment is concerned. There are a few vendor-specific implementations for what I am looking for but none are cross-vendor. I would like to automate the deployment of MBeans and registration with the server. I need the server to load and register my MBeans when the application is deployed and remove them when the application is un-deployed. I develop with: NetBeans 6.7.1, GlassFish 2.1, J2EE5, EJB3 More specifically, I need a way to manage timer service runs. My application needs to run different archiving agents and batch reporting. I was hoping JMX would give me remote access to create and manage the timer services and enable the user to create his own schedule. If the MBean is auto-registered on application deployment the user can immediately connect and manage the schedule. On the other hand, how can an EJB connect/access an MBean? Many thanks in advance. Gadi.
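
    One portable way to get register-on-deploy / unregister-on-undeploy behaviour without vendor extensions is a ServletContextListener in the web module. A hedged sketch follows; the MBean name, interface and scheduler class are made up for illustration:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// --- hypothetical standard MBean: the interface name must be <implementation class>MBean ---
interface ArchivingSchedulerMBean {
    void runArchivingNow();
}

class ArchivingScheduler implements ArchivingSchedulerMBean {
    public void runArchivingNow() { /* kick off the archiving agents / batch reports here */ }
}

// Declared as a <listener> in web.xml, so it runs exactly at deploy and undeploy time.
public class SchedulerMBeanLifecycle implements ServletContextListener {

    private ObjectName objectName;

    public void contextInitialized(ServletContextEvent event) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            objectName = new ObjectName("myapp:type=ArchivingScheduler");
            server.registerMBean(new ArchivingScheduler(), objectName);
        } catch (Exception e) {
            throw new RuntimeException("MBean registration failed", e);
        }
    }

    public void contextDestroyed(ServletContextEvent event) {
        try {
            if (objectName != null) {
                ManagementFactory.getPlatformMBeanServer().unregisterMBean(objectName);
            }
        } catch (Exception e) {
            // log and ignore during undeploy
        }
    }
}
```

    An EJB in the same VM can reach that MBean through ManagementFactory.getPlatformMBeanServer() and query or invoke it by ObjectName; remote clients would go through a JMXConnector instead.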

    Read the article

  • How to convert a printer driver to a stand-alone console application which can generate a printer file

    - by Thorbjørn Ravn Andersen
    I have a situation where the only way to generate a certain datafile is to print it manually to FILE: under Windows and save it in a file for further processing. I would really like to have a small stand-alone program which embeds this binary printer driver so I can run it from a batch file and have it generate that binary file for me, so we can then fully automate the "save file in Visio, 'print' it and upload it to the final destination and trigger a remote test". Is this possible with a suitable Windows SDK? I am a Java programmer, so I do not know Visual Studio and the possibilities with MSDN - yet! - but I'd appreciate pointers. EDIT: I have the installation files for that printer driver, both 32 and 64 bit. Older versions may include a 16 bit driver. EDIT: The "print to FILE:" functionality is just what was recommended by the documentation. I have played a little bit with using the LPR protocol to see what it can do. I'd still prefer the "invoke small binary" approach.

    Read the article

  • SQL Server Blocking Issue

    - by Robin Weston
    We currently have an issue that occurs roughly once a day on our SQL Server 2005 database server, although the time it happens is not consistent. Basically, the database grinds to a halt, and starts refusing connections with the following error message. This includes logging into SSMS: A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) Our CPU usage for SQL is usually around 15%, but when the DB is in its broken state it's around 70%, so it's clearly doing something, even if no-one can connect. Even if I disable the web app that uses the database the CPU still doesn't go down. I am unable to restart the SQLSERVER process as it is unresponsive, so I have to end up killing the process manually, which then puts the DB into Suspect/Recovery mode (which I can fix but it's a pain). Below are some PerfMon stats I gathered when the DB was in its broken state which might help. I have a bunch more if people want to request them: Active Transactions: 2 (Never Changes) Logical Connections: 34 (NC) Process Blocked: 16 (NC) User Connections: 30 (NC) Batch Request: 0 (NC) Active Jobs: 2 (NC) Log Truncations: 596 (NC) Log Shrinks: 24 (NC) Longest Running Transaction Time: 99 (NC) I guess the key is finding out what the DB is using its CPU on, but as I can't even log into SSMS this isn't possible with the standard methods. Disturbingly, I can't even use the dedicated admin connection to get into SSMS. I get the same timeout as with all other requests. Any advice, recommendations, or even sympathy, is much appreciated!
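
    When a connection can be obtained (for example from a lightweight job that snapshots state before the hang gets this bad), a query along these lines against the SQL Server 2005 DMVs shows what the engine is spending CPU on and who is blocking whom; this is a generic sketch, not tied to the figures above:

```sql
-- Currently executing requests, their blockers, waits, CPU and statement text.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.cpu_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50          -- skip system sessions
ORDER BY r.cpu_time DESC;
```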

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how have long since left.

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore, foreign keys), indexes (clustered or otherwise), constraints, on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old fashioned sweat and grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as: exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4 and when analyzed, these statements have the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here? So, I have multiple questions: How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch? Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?

    Read the article

  • How do I browse a WebSphere MQ message without removing it?

    - by jmgant
    I'm writing a .NET Windows Forms application that will post a message to a Websphere MQ queue and then poll a different queue for a response. If a response is returned, the application will partially process the response in real time. But the response needs to stay in the queue so that a daily batch job, which also reads from the response queue, can do the rest of the processing. I've gotten as far as reading the message. What I haven't been able to figure out is how to read it without removing it. Here's what I've got so far. I'm an MQ newbie, so any suggestions will be appreciated. And feel free to respond in C#. Public Function GetMessage(ByVal msgID As String) As MQMessage Dim q = ConnectToResponseQueue() Dim msg As New MQMessage() Dim getOpts As New MQGetMessageOptions() Dim runThru = Now.AddMilliseconds(CInt(ConfigurationManager.AppSettings("responseTimeoutMS"))) System.Threading.Thread.Sleep(1000) 'Wait for one second before checking for the first response' While True Try q.Get(msg, getOpts) Return msg Catch ex As MQException When ex.Reason = MQC.MQRC_NO_MSG_AVAILABLE If Now > runThru Then Throw ex System.Threading.Thread.Sleep(3000) Finally q.Close() End Try End While Return Nothing 'Should never reach here' End Function NOTE: I haven't verified that my code actually removes the message. But that's how I understand MQ to work, and that appears to be what's happening. Please correct me if that's not the default behavior.
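
    Since the question invites C#: the usual non-destructive read is a browse get, set through MQGetMessageOptions. A minimal hedged sketch (the queue is assumed to have been opened with MQC.MQOO_BROWSE in addition to the input option):

```csharp
using IBM.WMQ;

public static class QueueBrowser
{
    // Peek at the first message on the queue without removing it; the nightly
    // batch job can still do its own destructive get later.
    public static MQMessage BrowseFirst(MQQueue queue)
    {
        MQMessage msg = new MQMessage();
        MQGetMessageOptions getOpts = new MQGetMessageOptions();

        // BROWSE_FIRST reads the message at the head of the queue but leaves it there.
        // To keep peeking at later messages, reuse the options with MQC.MQGMO_BROWSE_NEXT.
        getOpts.Options = MQC.MQGMO_BROWSE_FIRST | MQC.MQGMO_NO_WAIT;
        queue.Get(msg, getOpts);
        return msg;
    }
}
```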

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities: Obtain connection to recent schema version (called "source"). Obtain connection to previous schema version (called "target"). Compare all schema objects between source and target. Create a script to make the target schema equivalent to the source schema ("upgrade script"). Create a rollback script to revert the source schema, used if the upgrade script fails (at any point). Create individual files for schema objects. The software must: Use ALTER TABLE instead of DROP and CREATE for renamed columns. Work with Oracle 10g or greater. Create scripts that can be batch executed (via command-line). Trivial installation process. (Bonus) Create scripts that can be executed with SQL*Plus. Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager Oracle SQL Developer Software that does not meet the criteria, or cannot be evaluated, includes: TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements. SQL Fairy - No installer. Complex installation process. Poorly documented. DBDiff - Crippled data set evaluation, poor customer support. OrbitDB - Crippled data set evaluation. SchemaCrawler - No easily identifiable download version for Oracle databases. SQL Compare - SQL Server, not Oracle. LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter. The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, Here's our mission: Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: If record does not exist (we have a key, so this isn't an issue), create it. If record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are. Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to... UPDATE OLTPTable1 SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1 WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 ) WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 ) ... As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.

    Read the article

  • Script to add user to MediaWiki

    - by Marquis Wang
    I'm trying to write a script that will create a user in MediaWiki, so that I can run a batch job to import a series of users. I'm using mediawiki-1.12.0. I got this code from a forum, but it doesn't look like it works with 1.12 (it's for 1.13) $name = 'Username'; #Username (MUST start with a capital letter) $pass = 'password'; #Password (plaintext, will be hashed later down) $email = 'email'; #Email (automatically gets confirmed after the creation process) $path = "/path/to/mediawiki"; putenv( "MW_INSTALL_PATH={$path}" ); require_once( "{$path}/includes/WebStart.php" ); $pass = User::crypt( $pass ); $user = User::createNew( $name, array( 'password' => $pass, 'email' => $email ) ); $user->confirmEmail(); $user->saveSettings(); $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 ); $ssUpdate->doUpdate(); Thanks!

    Read the article

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID (identity column) of a record inserted via InsertOnSubmit beforehand, e.g. before calling the data source's SubmitChanges? Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call of each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: I create a Directory object, set its name and attributes, then InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files. Currently I need to call InsertOnSubmit to insert the 'directory' into the database and the database mapping fills its ID column. But this creates a lot of transactions and accesses to the database and I imagine that if I did this inserting in a batch, the performance would be better. What I'd like to do is to somehow use Directory.ID before committing changes, create all my File and Directory objects in advance and then do a big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.
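
    A hedged sketch of the association-based alternative (the DataContext, entity and property names below are invented): if Directory and File are related entities in the O/R designer, assigning the parent object instead of the ID lets LINQ to SQL fix up the foreign keys when the single SubmitChanges runs:

```csharp
// Assumes a designer-generated FileSystemDataContext with Directories/Files tables,
// a Directory self-association (ParentDirectory) and a File.Directory association.
using (FileSystemDataContext db = new FileSystemDataContext())
{
    Directory root = new Directory { Name = "root" };
    Directory child = new Directory { Name = "child", ParentDirectory = root }; // no ID needed yet
    File file = new File { Name = "readme.txt", Directory = child };            // FK fixed up at submit time

    db.Directories.InsertOnSubmit(root);
    db.Directories.InsertOnSubmit(child);
    db.Files.InsertOnSubmit(file);

    // A single SubmitChanges: LINQ to SQL orders the inserts and propagates the
    // generated identity values into the children's foreign key columns.
    db.SubmitChanges();
}
```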

    Read the article

  • What do these C# auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how have long since left.

    Read the article

  • How can I access mainframe data with .NET applications and SQL queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to: Read data directly from the mainframe and produce reports based on it. Possibly using SQL queries. Read and write data from custom .NET applications. We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • How to use MSBuild to target multiple versions of .NET Framework?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software (e.g. NAnt). I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
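
    For reference, a hedged sketch of the long-running transaction wrapper described above with the isolation level and timeout made explicit (class and bean names are placeholders; whether changing the isolation level helps depends on what the other task is doing):

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class BatchTableBuilder {

    private final TransactionTemplate txTemplate;

    public BatchTableBuilder(PlatformTransactionManager txManager) {
        txTemplate = new TransactionTemplate(txManager);
        // State the isolation level explicitly rather than relying on the default.
        txTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
        // Generous timeout (seconds) for a run that can take half an hour or more.
        txTemplate.setTimeout(60 * 60);
    }

    public void runBatches() {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // Call the stored procedure once per batch on this one connection,
                // then read the temporary tables back before the transaction ends.
            }
        });
    }
}
```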

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails: create procedure test1 as SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2 -- insert rows into #table1 create index my_idx on #table1 (f1) SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11 -- "QUERY X" When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning: Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead. This can be resolved when running ad-hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after index creation: create index my_idx on #table1 (f1) go Now, the "QUERY X" query plan shows the use of index "my_idx". QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad-hoc copy above. P.S. If it matters, this is on Sybase 12.
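
    A hedged sketch of one commonly suggested workaround: move the query that needs the index into a second procedure, which is compiled when it is first executed, i.e. after test1 has created my_idx (names are from the snippet above; whether splitting the procedure is acceptable here is an assumption):

```sql
create procedure test1_query as
    -- Compiled on first execution, after my_idx already exists, so the
    -- optimizer hint on #table1 can actually be honoured.
    SELECT f1, f2, f3
    FROM #table1 (index my_idx)
    WHERE f1 = 11
go

create procedure test1 as
    SELECT f1, f2, f3
    INTO #table1
    FROM main_table
    WHERE 1 = 2

    -- insert rows into #table1

    create index my_idx on #table1 (f1)

    -- a #table created in this procedure is visible to the procedures it calls
    exec test1_query
go
```

    (On older ASE versions, #table1 may need to exist in the session at the moment test1_query is created; creating a throwaway #table1 with the same columns first, and dropping it after both procedures are created, is the usual trick.)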

    Read the article
