Search Results

Search found 58499 results on 2340 pages for 'temporal data'.


  • Reading Serial Data From C (OSX /dev/tty)

    - by Jud Stephenson
    I am trying to read data from a Bluetooth barcode scanner (KDC300) using C. Here is the code I have so far; the program successfully establishes a Bluetooth connection to the scanner, but when a barcode is scanned, no input is displayed on the screen. (Eventually more will be done with the data, but we have to get it working first, right?) Here is the program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <errno.h>
        #include <termios.h>
        #include <sys/ioctl.h>

        int main (int argc, const char * argv[])
        {
            // define vars
            int STOP = 0;
            //char buf[255];
            if(argv[1]) {
                int fd = open("/dev/tty.KDC1", O_RDONLY);
                if(fd == -1) {
                    printf("%s", strcat("Unable to open /dev/tty.", argv[1]));
                }
                int res;
                while(STOP == 0) {
                    while((res = read(fd,buf,255)) == 0);
                    {
                        if(res > 0) {
                            buf[res]=0;
                            printf("%s:%d\n", buf, res);
                            if(buf[sizeof(buf)]=='\n')
                                break;
                        }
                    }
                }
            }
            return 0;
        }

    If anyone has any ideas, I am at a loss on this so far. If it is any help, I can run screen /dev/tty.KDC1 and any barcodes scanned on the scanner appear in the terminal; I just can't do anything with the data. Jud

    Read the article

  • Always get exception when trying to Fill data to DataTable

    - by Sambath
    The code below is just a test to connect to an Oracle database and fill data into a DataTable. After executing the statement da.Fill(dt); I always get the exception "Exception of type 'System.OutOfMemoryException' was thrown." Has anyone met this kind of error? My project is running on VS 2005, and my Oracle database version is 11g. My computer is running Windows Vista; if I copy this code and run it on Windows XP, it works fine. Thank you.

        using System.Data;
        using Oracle.DataAccess.Client;
        ...
        string cnString = "data source=net_service_name; user id=username; password=xxx;";
        OracleDataAdapter da = new OracleDataAdapter("select 1 from dual", cnString);
        try
        {
            DataTable dt = new DataTable();
            da.Fill(dt); // Got error here
            Console.Write(dt.Rows.Count.ToString());
        }
        catch (Exception e)
        {
            // Exception of type 'System.OutOfMemoryException' was thrown.
            Console.Write(e.Message);
        }

    Update: I have no idea what happened to my computer. I just reinstalled Oracle 11g, and then my code worked normally.
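    For reference, here is a minimal variant of the same test that manages the connection explicitly so that provider or pooling state can be ruled out. This is an editor's sketch, not part of the original post; it assumes the same ODP.NET (Oracle.DataAccess.Client) provider and connection string shown above.

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        class FillTest
        {
            static void Main()
            {
                // Placeholder connection string, as in the post above.
                string cnString = "data source=net_service_name; user id=username; password=xxx;";

                // Dispose the connection and adapter deterministically.
                using (OracleConnection cn = new OracleConnection(cnString))
                using (OracleDataAdapter da = new OracleDataAdapter("select 1 from dual", cn))
                {
                    DataTable dt = new DataTable();
                    da.Fill(dt);                    // Fill opens and closes the connection as needed
                    Console.Write(dt.Rows.Count);
                }
            }
        }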

    Read the article

  • Unit test with live data

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a webpage in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with live data. The problem is that I now depend on having at least two users whose IDs I know. Furthermore, I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user: that would mean I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails while updating, I have to delete the user manually. If I did it any other way, without live data, I would not be able to know for sure that the data was present in the database after updating, etc. What is the proper way to do this without having a test that tests a lot of functionality by itself?
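    One common way to keep such tests self-cleaning is to wrap each test in a TransactionScope that is never completed, so every insert or update is rolled back automatically. The sketch below is an editor's addition, not from the original question: the connection string, the Messages table, and the raw SQL are placeholders standing in for the poster's LINQ code, which would enlist in the same ambient transaction.

        using System.Data.SqlClient;
        using System.Transactions;
        using NUnit.Framework;   // any test framework works the same way

        [TestFixture]
        public class MessageTests
        {
            // Placeholder connection string - substitute the real test database.
            private const string ConnectionString = "Server=.;Database=AppDb;Integrated Security=true";

            private TransactionScope _scope;

            [SetUp]
            public void SetUp()
            {
                // Every connection opened inside the test enlists in this ambient transaction.
                _scope = new TransactionScope();
            }

            [TearDown]
            public void TearDown()
            {
                // Disposing without calling Complete() rolls everything back,
                // so created test users and messages never persist in the live tables.
                _scope.Dispose();
            }

            [Test]
            public void SendMessage_IsVisibleInsideTheTest()
            {
                using (var cn = new SqlConnection(ConnectionString))
                {
                    cn.Open();

                    // In the real test this would be the LINQ/business-layer call under test.
                    using (var insert = new SqlCommand(
                        "INSERT INTO Messages (SenderId, ReceiverId, Body) VALUES (1, 2, 'hello')", cn))
                    {
                        insert.ExecuteNonQuery();
                    }

                    // The data is visible inside the transaction, so assertions still work.
                    using (var count = new SqlCommand(
                        "SELECT COUNT(*) FROM Messages WHERE ReceiverId = 2 AND Body = 'hello'", cn))
                    {
                        Assert.AreEqual(1, (int)count.ExecuteScalar());
                    }
                }
            }
        }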

    Read the article

  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert it into permanentMOC, or 2) copy the object into permanentMOC. The Core Data FAQ says I can copy an object like this:

        NSManagedObjectID *objectID = [managedObject objectID];
        NSManagedObject *copy = [context2 objectWithID:objectID];

    (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved, later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject is actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:. I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects for which I have this problem, so I don't want to have to write and update copy: methods for all of them as I continue developing.) Is there a better way?

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. The trick in both implementations seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682):

        bodyBodyInteraction(float4 bi, float4 bj, float3 ai)

    It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is easy to understand. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of attached objects is different in each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature that explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.

    Read the article

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHibernate's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. Non-bootstrap-related database tests appear to work fine, though. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?

    Read the article

  • Database access through collections

    - by Mike
    Hi all, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database; on the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter:

        User user = user.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List. My question is that I am not sure what the best practice is here. If I held the messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
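    On the 10,000-message concern, one option is to page the query instead of materializing the whole collection. The sketch below is an editor's addition, not from the original post; the Message class and the method names are assumptions, and with LINQ the IQueryable would come from the poster's business data provider or data context.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Minimal stand-in for the poster's domain type (an assumption, not the real class).
        public class Message
        {
            public int Id { get; set; }
            public bool IsRead { get; set; }
            public DateTime SentOn { get; set; }
        }

        public static class MessageQueries
        {
            // Count on the server/source; no Message objects are materialized.
            public static int UnreadCount(IQueryable<Message> messages)
            {
                return messages.Count(m => !m.IsRead);
            }

            // Fetch a single page, so a user with 10,000 unread messages
            // never pulls more than pageSize rows at a time.
            public static IList<Message> GetUnreadMessages(
                IQueryable<Message> messages, int pageIndex, int pageSize)
            {
                return messages
                    .Where(m => !m.IsRead)
                    .OrderByDescending(m => m.SentOn)
                    .Skip(pageIndex * pageSize)
                    .Take(pageSize)
                    .ToList();
            }
        }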

    Read the article

  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all other tables in the database. That means the same XML field contains various kinds of XML. For example, my table has two records with XML data like this:

    1st record:

        <client>
            <name>xyz</name>
            <ssn>432-54-4231</ssn>
        </client>

    2nd record:

        <emp>
            <name>abc</name>
            <sal>5000</sal>
        </emp>

    These are two sample formats and just two records; the table actually has many more XML formats in the same field and many records in each format. Now my problem is that upon query I need these XML formats to be converted into tabular result sets. What are the options for me? It would be a regular task to query this table and generate reports from it. I want to create a stored procedure to which I can pass whether I need to query "<emp>" or "<client>", and the stored procedure should return tabular data.

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at a rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means additional index inserts on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".

    Read the article

  • Binding data to the custom DataGridView

    - by Jatin Chaturvedi
    All, I have a customized DataGridView where I have implemented extra functionality and bound the data source in the same custom DataGridView (using C# and .NET). I am able to use it properly by placing it on a panel control. I added a label acting as a button on the panel control to display data from the data source in the data grid and create a binding source. Another label acting as a button is used to update data from the grid to the database.

    Issue: I press the "show" label to display data in the DataGridView, modify a grid cell value, and immediately press the "update" label, which is on the same panel control. I observe that the cursor is still in the grid cell when I press the save button. While saving, the cell value is null even though I entered something in the presentation layer. The expected behaviour is to get the modified value while saving.

    Special case: after typing something in the grid cell, if I click somewhere else (like the row below the one where I entered something) before I click the save button, it works fine. (Here, mainly, I removed the focus from the currently modified cell.) Is there any way to commit the bound values before I click the save button? Please advise, and feel free to ask me if you need any information. I have also seen the same kind of problem on this forum, but unfortunately the author found the answer and didn't post it back; here is that URL: http://social.msdn.microsoft.com/Forums/en/winformsdesigner/thread/54dcc87a-adc2-4965-b306-9aa9e79c2946
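    A common cause of this symptom is that the edited cell is still in edit mode, so the pending value has not yet been pushed into the bound data source. The sketch below is an editor's addition, not from the original thread; the control names (customGridView, bindingSource) and the save routine are hypothetical stand-ins for the poster's members.

        // Inside the "update" label's Click handler, commit any in-progress cell edit
        // before reading values from the bound data source.
        private void updateLabel_Click(object sender, EventArgs e)
        {
            // Pushes the value of the cell currently being edited into the cell.
            customGridView.EndEdit();

            // Pushes the current row's pending changes into the underlying data source.
            bindingSource.EndEdit();

            // Now the data source reflects what the user typed, and it is safe to save.
            SaveChangesToDatabase();   // hypothetical save routine
        }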

    Read the article

  • Data is not saved in MS Access database

    - by qwerty
    Hello, I have a Visual C# project and I'm trying to insert data into an MS Access database when I press a button. Here is the code:

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                OleDbDataAdapter adapter = new OleDbDataAdapter();
                adapter.InsertCommand = new OleDbCommand();
                adapter.InsertCommand.CommandText = "insert into Candidati values ('" +
                    maskedTextBox1.Text.Trim() + "','" + textBox1.Text.Trim() + "', '" +
                    textBox2.Text.Trim() + "', '" + textBox3.Text.Trim() + "','" +
                    Convert.ToDouble(maskedTextBox2.Text) + "','" +
                    Convert.ToDouble(maskedTextBox3.Text) + "')";
                con.Open();
                adapter.InsertCommand.Connection = con;
                adapter.InsertCommand.ExecuteNonQuery();
                con.Close();
                MessageBox.Show("Inregistrare adaugata cu succes!");
                maskedTextBox1.Text = null;
                maskedTextBox2.Text = null;
                maskedTextBox3.Text = null;
                textBox1.Text = null;
                textBox2.Text = null;
                textBox3.Text = null;
                maskedTextBox1.Focus();
            }
            catch (AdmitereException exc)
            {
                MessageBox.Show("A aparut o exceptie: " + exc.Message, "Eroare!",
                    MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }

    The connection string is:

        private static string connectionString;
        OleDbConnection con;

        public AddCandidati()
        {
            connectionString = "Provider=Microsoft.JET.OLEDB.4.0;Data Source=Admitere.mdb";
            con = new OleDbConnection(connectionString);
            InitializeComponent();
        }

    where AddCandidati is the form. The data is not saved in the database; why? I have the .mdb file in the project folder. What am I doing wrong? I did not get any exception when I pressed the button. Thanks!
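    As an aside, here is the same insert written with OleDb parameters, which avoids the string concatenation and quoting problems. This is an editor's sketch, not from the original question; it reuses the table and control names from the post and would replace the body of button1_Click (it needs using System.Data.OleDb;). One frequent cause of "no rows appear but no exception is thrown" with an .mdb kept in the project folder is that the file is copied to the build output directory, so the copy being written is not the one being inspected; that is worth checking as well.

        using (OleDbConnection con = new OleDbConnection(
                   "Provider=Microsoft.JET.OLEDB.4.0;Data Source=Admitere.mdb"))
        using (OleDbCommand cmd = new OleDbCommand(
                   "INSERT INTO Candidati VALUES (?, ?, ?, ?, ?, ?)", con))
        {
            // OleDb parameters are positional, in the order of the ? placeholders.
            cmd.Parameters.AddWithValue("@p1", maskedTextBox1.Text.Trim());
            cmd.Parameters.AddWithValue("@p2", textBox1.Text.Trim());
            cmd.Parameters.AddWithValue("@p3", textBox2.Text.Trim());
            cmd.Parameters.AddWithValue("@p4", textBox3.Text.Trim());
            cmd.Parameters.AddWithValue("@p5", Convert.ToDouble(maskedTextBox2.Text));
            cmd.Parameters.AddWithValue("@p6", Convert.ToDouble(maskedTextBox3.Text));

            con.Open();
            cmd.ExecuteNonQuery();
        }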

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files of code that test code (using a "unittest" class). Later I found it would be nice to also test database integrity: things like whether keys have the correct format and whether parent and child nodes point to each other correctly. I put this into a separate directory tree and use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep these separate. To test the integrity of data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something happens in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by cron on the production environment.

    Read the article

  • Essence of BiMap in Google collections

    - by littleEinstein
    I am still quite puzzled by the BiMap in Google Collections/Guava. It is claimed that "The two bimaps are backed by the same data; any changes to one will appear in the other." I browsed through the source code, and I found the use of a delegate in ForwardingMap. But in any actual subclass of StandardBiMap, I do see the data being put into both the forward and the reverse map. So what is the essence, and why does it claim to save space by keeping only one copy of the data? Is it just that the actual objects are one set, but two distinct sets of references to these objects are still needed, one set maintained in the forward map and the other in the reverse map?

        private V putInBothMaps(K key, V value, boolean force) {
            boolean containedKey = containsKey(key);
            if (containedKey && Objects.equal(value, get(key))) {
                return value;
            }
            if (force) {
                inverse().remove(value);
            } else if (containsValue(value)) {
                throw new IllegalArgumentException(
                    "value already present: " + value);
            }
            V oldValue = super.put(key, value);
            updateInverseMap(key, containedKey, oldValue, value);
            return oldValue;
        }

    Read the article

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP!

    I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently, I have 28,300 rows, but this may change daily. Example of the data format:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of the finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob      1     0     0     0     0
        Jim      0     1     0     0     0
        John     0     0     2     1     1
        Mary     0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long
            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count). _
                End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _
                xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _
                Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With
            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub

    Read the article

  • How Does MVC Handle Missing Data Requirements

    - by Don Bakke
    I'm teaching myself MVC concepts in hopes of applying them to a non-OO/procedural development environment. I am pretty sure I understand the simple View - Request - Controller - Request - Model - Response - Controller - Response - View flow. What I am struggling with is understanding more complex scenarios. For instance, let's say I have a shopping cart form with a button for 'Calculate Shipping'. Normally a click on this button will follow the above flow. But what if there is missing data, like the zip code? Should the View verify this first and alert the user before making a 'Calculate Shipping' request? Or should the request be made and the Model return a notification that critical data is missing? If the latter, does the Controller instruct the View to alert the user? What if I wanted to prompt the user for the missing zip code (perhaps in a popup input display) and then automatically request the 'Calculate Shipping' method again? I suppose this gets into the question of how smart a View ought to be. It seems that MVC has evolved due to richer UIs and automation (such as data binding), and this muddies the water from a purist MVC perspective. Any thoughts are greatly appreciated.
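    As one concrete illustration of the "request is made and the controller/model reports the missing data" option, here is an editor's sketch written against ASP.NET MVC purely as an example; the controller, view model, and shipping value are hypothetical, not part of the original post.

        using System.Web.Mvc;

        // Minimal view model assumed by the sketch.
        public class CartViewModel
        {
            public string ZipCode { get; set; }
            public decimal? ShippingCost { get; set; }
        }

        public class CartController : Controller
        {
            [HttpPost]
            public ActionResult CalculateShipping(CartViewModel cart)
            {
                if (string.IsNullOrWhiteSpace(cart.ZipCode))
                {
                    // The controller tells the view what to show; the view stays dumb.
                    ModelState.AddModelError("ZipCode", "Enter a zip code to calculate shipping.");
                    return View("Cart", cart);
                }

                // In a real application this would delegate to the model layer.
                cart.ShippingCost = 5.00m;
                return View("Cart", cart);
            }
        }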

    Read the article

  • Capture data read from file into string stream Java

    - by halluc1nati0n
    I'm coming from a C++ background, so be kind on my n00bish queries... I'd like to read data from an input file and store it in a stringstream. I can accomplish this in an easy way in C++ using stringstreams. I'm a bit lost trying to do the same in Java. Following is a crude way I've developed where I'm storing the data read line-by-line in a string array. I need to use a string stream to capture my data into (rather than use a string array). Any help?

        char dataCharArray[] = new char[2];
        int marker = 0;
        String inputLine;
        String temp_to_write_data[] = new String[100];

        // Now, read from output_x into stringstream
        FileInputStream fstream = new FileInputStream("output_" + dataCharArray[0]);

        // Convert our input stream to a BufferedReader
        BufferedReader in = new BufferedReader(new InputStreamReader(fstream));

        // Continue to read lines while there are still some left to read
        while ((inputLine = in.readLine()) != null)
        {
            // Print file line to screen
            // System.out.println(inputLine);
            temp_to_write_data[marker] = inputLine;
            marker++;
        }

    Read the article

  • DesignTime data not showing in Blend when bound against CollectionViewSource

    - by bitbonk
    I have a DataTemplate for a view model where an ItemsControl is bound against a CollectionViewSource (to enable sorting in XAML):

        <DataTemplate x:Key="equipmentDataTemplate">
            <Viewbox>
                <Viewbox.Resources>
                    <CollectionViewSource x:Key="viewSource" Source="{Binding Modules}">
                        <CollectionViewSource.SortDescriptions>
                            <scm:SortDescription PropertyName="ID" Direction="Ascending"/>
                        </CollectionViewSource.SortDescriptions>
                    </CollectionViewSource>
                </Viewbox.Resources>
                <ItemsControl ItemsSource="{Binding Source={StaticResource viewSource}}"
                              Height="{DynamicResource equipmentHeight}"
                              ItemTemplate="{StaticResource moduleDataTemplate}">
                    <ItemsControl.ItemsPanel>
                        <ItemsPanelTemplate>
                            <StackPanel Orientation="Horizontal" />
                        </ItemsPanelTemplate>
                    </ItemsControl.ItemsPanel>
                </ItemsControl>
            </Viewbox>
        </DataTemplate>

    I have also set up the UserControl where all of this is defined to provide design-time data:

        d:DataContext="{x:Static vm:DesignTimeHelper.Equipment}"

    This is basically a static property that gives me an EquipmentViewModel that has a list of ModuleViewModels (Equipment.Modules). Now, as long as I bind to the CollectionViewSource, the design-time data does not show up in Blend 3. When I bind to the view model collection directly,

        <ItemsControl ItemsSource="{Binding Modules}"

    I can see the design-time data. Any idea what I could do?

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price.

    The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly, and yearly sales reports, there are various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAraeaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, PrentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice, just advice regarding the DailySales table. Is there any way to break up this particular table to achieve granularity?

    Read the article

  • Webpage data scraping using Java

    - by Gemma
    I am now trying to implement a simple HTML webpage scraper using Java, and I have a small problem. Suppose I have the following HTML fragment:

        <div id="sr-h-left" class="sr-comp">
            <a class="link-gray-underline" id="compare_header" rel="nofollow" href="javascript:i18nCompareProd('/serv/main/buyer/ProductCompare.jsp?nxtg=41980a1c051f-0942A6ADCF43B802');">
                <span style="cursor: pointer;" class="sr-h-o">Compare</span>
            </a>
        </div>
        <div id="sr-h-right" class="sr-summary">
            <div id="sr-num-results">
                <div class="sr-h-o-r">Showing 1 - 30 of 1,439 matches,

    The data I am interested in is the integer 1,439 shown at the bottom. I am just wondering how I can get that integer out of the HTML. I am now considering using a regular expression and then java.util.regex.Pattern to help get the data out, but I am still not very clear about the process. I would be grateful if you guys could give me some hint or idea on this data scraping. Thanks a lot.

    Read the article

  • Write a sql for updating data based on time

    - by Lu Lu
    Hello everyone, because I am new to SQL Server and T-SQL, I will need your help. I have two tables, Realtime and EOD. To illustrate my question, here is example data for the two tables:

    Realtime table:

        Symbol   Date                  Value
        ABC      1/3/2009 03:05:01     327    // this day does not exist in EOD -> insert
        BBC      1/3/2009 03:05:01     458    // this day does not exist in EOD -> insert
        ABC      1/2/2009 03:05:01     326    // this day is new -> update
        BBC      1/2/2009 03:05:01     454    // this day is new -> update
        ABC      1/2/2009 02:05:01     323
        BBC      1/2/2009 02:05:01     453
        ABC      1/2/2009 01:05:01     313
        BBC      1/2/2009 01:05:01     423

    EOD table:

        Symbol   Date                  Value
        ABC      1/2/2009 02:05:01     323
        BBC      1/2/2009 02:05:01     453

    I need to create a stored procedure to update the values of the symbols. If the data for a symbol's day is new (comparing Realtime and EOD), the procedure should update the value and date in EOD for that day if a row already exists, otherwise insert one. After running, the stored procedure should leave the EOD table with this new data:

        Symbol   Date                  Value
        ABC      1/3/2009 03:05:01     327
        BBC      1/3/2009 03:05:01     458
        ABC      1/2/2009 03:05:01     326
        BBC      1/2/2009 03:05:01     454

    P.S.: I use SQL Server 2005, and I have a similar answered question here: http://stackoverflow.com/questions/2726369/help-to-the-way-to-write-a-query-for-the-requirement. Please help me. Thanks.

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
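    On the "can these backups be done synchronously?" point: a BACKUP DATABASE statement issued through ADO.NET blocks until SQL Server has finished writing the file, so a job can safely start the transfer as soon as the call returns. The sketch below is an editor's addition, not part of the poster's eventual BAT/OSQL solution; the connection string, database name, and path are placeholders.

        using System;
        using System.Data.SqlClient;

        class BackupStep
        {
            // Runs a full backup and returns only when SQL Server has finished writing the file,
            // so the caller knows the .bak is complete before FTPing it to the other host.
            static void BackupDatabase(string connectionString, string databaseName, string backupFile)
            {
                using (var cn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                           "BACKUP DATABASE @db TO DISK = @file WITH INIT", cn))
                {
                    cmd.CommandTimeout = 0;   // a multi-GB backup can easily outlive the 30-second default
                    cmd.Parameters.AddWithValue("@db", databaseName);
                    cmd.Parameters.AddWithValue("@file", backupFile);

                    cn.Open();
                    cmd.ExecuteNonQuery();    // blocks until the backup completes (or throws)
                }
            }
        }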

    Read the article

  • iPhone Core Data does not refresh table

    - by Brian515
    Hi all, I'm trying to write an application with Core Data, and I have been able to successfully read and write to the Core Data database. However, if I write to the database in one view controller, my other view controllers will not see the change until the app is closed and then reopened. This is really frustrating. I'm not entirely sure how to get the refresh method, - (void)refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag, to work. How do I get a reference to my managed object? Anyway, here's the code I'm using to read the data back; this is in the viewDidLoad method.

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Website"
                                                  inManagedObjectContext:managedObjectContext];
        [request setEntity:entity];

        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"siteName" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [sortDescriptor release];
        [sortDescriptors release];

        NSError *error = nil;
        NSMutableArray *mutableFetchResults =
            [[managedObjectContext executeFetchRequest:request error:&error] mutableCopy];
        if (mutableFetchResults == nil) {
            // Handle the error
        }

        [self setNewsTitlesArray:mutableFetchResults];
        [mutableFetchResults release];
        [request release];

        [newsSourcesTableView reloadData];

    Thanks for help in advance!

    Read the article

  • set a default saving data in a selection box in HABTM model

    - by vincent low
    Here is my problem: my system allows a user to add an event, which includes a date, time, and place. The user is also allowed to choose which other users the event is shared with ("Event sharing to"). I have successfully created events and shared them with other users; when a user who was chosen to share the event logs in, he is able to view that particular event. But the problem is that when a user adds an event, he must also choose his own name in the select box; if he doesn't, he will be unable to read the event that he created himself. So I need to save a default value in my model: whether or not the user chooses anyone to share the event with, the record should also be saved against his own user id. Here is my code for the select box; what can I change so that it always saves a default value for the user's own data?

        $form->input('User', array(
            'label' => 'Select Related Potential',
            'options' => $users,
            //'id' => 'user',
            'style' => 'width:250px;height:100px',
            //'selected' => $ownUserId
        ));
        ?>

    I tried to solve it by adding one more input to add.ctp, but then the permission is set only for the user who created the event, and the other users I choose are unable to read it:

        $form->input('User', array(
            'label' => 'Select Related Potential',
            'options' => $users,
            //'id' => 'user',
            'style' => 'width:250px;height:100px',
            'selected' => $ownUserId
        ));
        $form->input('User', array('type' => 'hidden', 'value' => $ownUserId));

    Read the article

  • populate FusionCharts with XML data AJAX

    - by Mick
    Hi, I have a JS file that uses AJAX to get an XML document from a PHP script. The XML forms the data to draw a FusionCharts chart. I know I am getting the XML data OK, but FusionCharts will not draw it. I would really appreciate any help, thanks. (FusionCharts.js is included earlier in my script.)

        if (XMLHttpRequestObject) {
            XMLHttpRequestObject.open("GET", "chart.php?job=" + job, true);
            XMLHttpRequestObject.onreadystatechange = function() {
                if (XMLHttpRequestObject.readyState == 4 && XMLHttpRequestObject.status == 200) {
                    var xdoc = XMLHttpRequestObject.responseXML;
                    var chart1 = new FusionCharts("Pie3D.swf", "chart1Id", "400", "300", "0", "1");
                    chart1.setDataXML(xdoc);
                    chart1.render("chart1div");

    chart.php produces this XML data:

        <chart caption='ADI Chart Test '>
            <set label='Driver' value='12.25' />
            <set label='Other Staff' value='223.21' />
            <set label='Equipment' value='0.00' />
            <set label='Additional Items' value='0.00' />
            <set label='Vehicle Fuel' value='0.00' />
            <set label='Accomodation' value='0.00' />
            <set label='Generator Fuel' value='0.00' />
        </chart>

    Read the article

  • Export database data to csv from view by date range asp.net mvc3

    - by Benjamin Randal
    I am trying to find a way to export data from my database and save it as a .csv file. Ideally the user will be able to select a date range on a view, which will display the data to be exported, and then the user can click an "export to CSV" link. I've done quite a bit of searching but have not found much that is specific enough to help me step through the process. Any help would be great. I would like to export data from this database model:

        public class InspectionInfo
        {
            [Key]
            public int InspectionId { get; set; }

            [DisplayName("Date Submitted")]
            [DataType(DataType.Date)]
            // [Required]
            public DateTime Submitted { get; set; }

            [DataType(DataType.MultilineText)]
            [MaxLength(1000)]
            // [Required]
            public string Comments { get; set; }

            // [Required]
            public Contact Contact { get; set; }

            [ForeignKey("Contact")]
            public Int32 ContactId { get; set; }

            [MaxLength(100)]
            public String OtherContact { get; set; }

    I also have a service for search, but I am having difficulty implementing the export:

        public SearchResults SearchInspections(SearchRequest request)
        {
            using (var db = new InspectionEntities())
            {
                var results = db.InspectionInfos
                    .Where(i => (
                        (null == request.StartDate || i.Submitted >= request.StartDate.Value) &&
                        (null == request.EndDate || i.Submitted <= request.EndDate.Value)))
                    .OrderBy(i => i.Submitted)
                    .Skip(request.PageSize * request.PageIndex).Take(request.PageSize);

                return new SearchResults
                {
                    TotalResults = results.Count(),
                    PageIndex = request.PageIndex,
                    Inspections = results.ToList(),
                    SearchRequest = request
                };
            }
        }
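    For the "export to CSV" link itself, one minimal approach is a controller action that runs the same date-range search, writes the rows into a string, and returns it as a file. This is an editor's sketch, not from the original question; InspectionService is a hypothetical wrapper around the SearchInspections method shown above, and the column list is illustrative.

        using System.Text;
        using System.Web.Mvc;

        public class InspectionController : Controller
        {
            private readonly InspectionService _service = new InspectionService();   // hypothetical service class

            // GET /Inspection/ExportCsv?StartDate=...&EndDate=...
            public ActionResult ExportCsv(SearchRequest request)
            {
                request.PageIndex = 0;
                request.PageSize = int.MaxValue;              // export everything in the date range
                var results = _service.SearchInspections(request);

                var sb = new StringBuilder();
                sb.AppendLine("InspectionId,Submitted,ContactId,Comments");
                foreach (var i in results.Inspections)
                {
                    // Quote the free-text field and double any embedded quotes so commas survive.
                    var comments = (i.Comments ?? string.Empty).Replace("\"", "\"\"");
                    sb.AppendLine(string.Format("{0},{1:yyyy-MM-dd},{2},\"{3}\"",
                        i.InspectionId, i.Submitted, i.ContactId, comments));
                }

                // The browser downloads the response as inspections.csv.
                return File(Encoding.UTF8.GetBytes(sb.ToString()), "text/csv", "inspections.csv");
            }
        }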

    Read the article
