Search Results

Search found 58499 results on 2340 pages for 'temporal data'.


  • Storing statistics of multiple data types in SQL Server 2008

    - by Mike
    I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc.). Currently I am using a single table to store these values as type varchar, with an extra field to denote the datatype that each value should be. When I display a value, I use that datatype field to format it. I use sprocs to calculate the data for reporting, and the datatype field to convert to the appropriate datatype for the appropriate calculations. This approach works, but I don't like storing all kinds of data in a varchar field.

    The only alternative that I can see is to have separate tables for each datatype I want to store, and save each record to the appropriate table based on its datatype. To retrieve, I run a case statement to join the appropriate table and get the data. This seems to solve the problem. This, however, seems like a lot of work for ... what gain? Wondering if I'm missing something here. Is there a better way to do this? Thanks in advance!
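
    An alternative worth testing in SQL Server 2008 is the sql_variant type, which keeps the single-table design but preserves each value's underlying datatype instead of flattening everything to varchar. A minimal sketch with hypothetical table and column names:

        -- One row per statistic; StatValue keeps its base type inside sql_variant
        CREATE TABLE UserStatistics (
            StatId    INT IDENTITY(1,1) PRIMARY KEY,
            StatName  VARCHAR(100) NOT NULL,
            StatValue SQL_VARIANT NULL
        );

        INSERT INTO UserStatistics (StatName, StatValue) VALUES ('Visits', 42);
        INSERT INTO UserStatistics (StatName, StatValue)
        VALUES ('LastRun', CAST('2010-04-01' AS DATETIME));

        -- SQL_VARIANT_PROPERTY exposes the stored base type for formatting
        SELECT StatName,
               StatValue,
               SQL_VARIANT_PROPERTY(StatValue, 'BaseType') AS BaseType
        FROM UserStatistics;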


  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in Java that moves data between DB2 and Sybase. FYI - this is not done through any import/export process, because there are several conditions on each table that are only available at run time, which is why this process is developed in Java.

    Right now I have a single DatabaseReader and DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads. I want to expand this further so that I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to a writer. I hope this process will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.).

    But I guess the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but how about DB2? How can I achieve this behavior with DB2? Is there a way to say fetch the first 10 records, then fetch the next 10 records, and so on? Any suggestions / ideas?

        DB2 Version - DB2 v8.1.0.144
        Fix Pack Num - 16
        Linux
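
    DB2 does have an analogue to Oracle's rownum: the ROW_NUMBER() OLAP function, available in DB2 v8. A minimal sketch of range-based chunking for the readers, assuming a hypothetical source table and some column combination that yields a stable ordering (without a primary key, an arbitrary ORDER BY can hand overlapping row sets to different readers):

        SELECT *
        FROM (
            SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.some_column) AS rn
            FROM source_table t
        ) AS numbered
        WHERE rn BETWEEN 11 AND 20;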


  • Windows Programming: ID2D1Bitmap Interface - Getting the Bitmap Data

    - by LostInBrackets
    I've been writing my own library of functions to access some of the new Direct2D Windows libraries. In particular, I've been working on the ID2D1Bitmap interface. I wanted to write a function to return a pointer to the start of the bitmap data (for the editing of particular pixels, or custom encoding, or whatever else I might wish for in the future). Unfortunately... problem ahead... I can't seem to find a way to get access to the raw pixel data from the ID2D1Bitmap interface. Does anyone have an idea how to access this?

    One of my friends suggested drawing the bitmap to a surface and extracting the bitmap data from there. I don't know if this would work. It definitely seems inefficient, and I wouldn't know which kind of surface to use. Any help is appreciated. (C++ in particular, but I assume the code won't be too different between languages.) (I know I could just read in the data directly from the file, but I'm using the WIC decoders, which means it could be in any number of indecipherable formats.)
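
    For what it's worth, the friend's suggestion is close to the standard Direct2D 1.0 workaround: render into a WIC bitmap render target, then lock the underlying IWICBitmap for a CPU-side pointer. A rough sketch, with error handling omitted and assuming existing ID2D1Factory* d2dFactory and IWICImagingFactory* wicFactory pointers (all names here are illustrative):

        // CPU-accessible WIC bitmap plus a D2D render target that draws into it
        IWICBitmap* wicBitmap = NULL;
        wicFactory->CreateBitmap(width, height, GUID_WICPixelFormat32bppPBGRA,
                                 WICBitmapCacheOnDemand, &wicBitmap);

        ID2D1RenderTarget* wicTarget = NULL;
        d2dFactory->CreateWicBitmapRenderTarget(wicBitmap,
                                                D2D1::RenderTargetProperties(),
                                                &wicTarget);

        // An ID2D1Bitmap is tied to the render target it was created on, so the
        // image may need to be recreated on (or copied to) this target, then drawn
        wicTarget->BeginDraw();
        // wicTarget->DrawBitmap(bitmapOnThisTarget);
        wicTarget->EndDraw();

        // Lock the WIC bitmap to reach the raw pixels
        WICRect rect = { 0, 0, (INT)width, (INT)height };
        IWICBitmapLock* lock = NULL;
        wicBitmap->Lock(&rect, WICBitmapLockRead, &lock);

        UINT bufferSize = 0;
        BYTE* pixels = NULL;                      // raw 32bpp PBGRA data
        lock->GetDataPointer(&bufferSize, &pixels);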


  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: the SQLite assembly referenced in my DAL assembly does not get copied to the output folder when running unit tests (Copy Local is set to true).

    I am working on a .NET 3.5 app in VS2008, with NHibernate & SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to other layers, so there is no need to reference NHibernate or the System.Data.SQLite assemblies in other layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and creates a new IRepository implementation. This is also done to avoid having a shared SQLite in-memory config for all assemblies which need it, and to avoid referencing those DAL-internal assemblies.

    The problem is when I run unit tests which reside in a separate project: if I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although this project references my DAL project, which references System.Data.SQLite, which has its Copy Local property set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, then it does get copied and the unit tests work. What am I doing wrong?


  • iPhone POST to PHP failing

    - by Alexander
    I have this code here:

        NSString *post = [NSString stringWithFormat:@"deviceIdentifier=%@&deviceToken=%@", deviceIdentifier, deviceToken];
        NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:NO];
        NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];

        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:@"http://website.com/RegisterScript.php"]];
        [request setHTTPMethod:@"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setValue:@"MyApp-V1.0" forHTTPHeaderField:@"User-Agent"];
        [request setHTTPBody:postData];

        NSData *urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
        NSString *response = [[NSString alloc] initWithData:urlData encoding:NSASCIIStringEncoding];

    which should be sending data to my PHP server to register the device into our database, but for some of my users no POST data is being sent. I've been told that this line:

        NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:NO];

    may be causing the problem. Any thoughts on why this code would sometimes not send the POST data? On the server I captured a packet trace for a failed send, and the Content-Length came up as 0, so no data was sent at all. Thanks for any help.
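
    One thing worth checking: dataUsingEncoding:allowLossyConversion:NO returns nil if the string contains any character ASCII cannot represent, which would leave postData empty and match the observed Content-Length of 0. And if deviceToken is built from an NSData description, it contains angle brackets and spaces that belong percent-escaped in a form body. A rough sketch of the escaping (note that stringByAddingPercentEscapesUsingEncoding: leaves characters like '&' and '=' alone, so values that may contain those need CFURLCreateStringByAddingPercentEscapes instead):

        // Escape each value before assembling the form body, so the body
        // survives ASCII encoding and the server parses the pairs correctly
        NSString *escapedIdentifier =
            [deviceIdentifier stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        NSString *escapedToken =
            [deviceToken stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        NSString *post = [NSString stringWithFormat:@"deviceIdentifier=%@&deviceToken=%@",
                          escapedIdentifier, escapedToken];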


  • CURL & web.py: transfer closed with outstanding read data remaining

    - by Richard J
    Hi Folks, I have written a web.py POST handler, thus:

        import web

        urls = ('/my', 'Test')

        class Test:
            def POST(self):
                return "Here is your content"

        app = web.application(urls, globals())

        if __name__ == "__main__":
            app.run()

    When I interact with it using curl from the command line, I get different responses depending on whether I post any data or not. Posting no data to the server gives me back the "Here is your content" string:

        curl -i -X POST http://localhost:8080/my
        HTTP/1.1 200 OK
        Transfer-Encoding: chunked
        Date: Thu, 06 Jan 2011 16:42:41 GMT
        Server: CherryPy/3.1.2 WSGI Server

        Here is your content

    Posting example.zip to the server results in this error:

        curl -i -X POST --data-binary "@example.zip" http://localhost:8080/my
        HTTP/1.1 100
        Content-Length: 0
        Content-Type: text/plain

        HTTP/1.1 200 OK
        Transfer-Encoding: chunked
        Date: Thu, 06 Jan 2011 16:43:47 GMT
        Server: CherryPy/3.1.2 WSGI Server

        curl: (18) transfer closed with outstanding read data remaining

    I've scoured the web.py documentation (what there is of it) and can't find any hints as to what might be going on here. Possibly something to do with 100 Continue? I tried writing a Python client in the hope it might help clarify:

        h1 = httplib.HTTPConnection('localhost:8080')
        h1.request("POST", "http://localhost:8080/my", body, headers)
        print h1.getresponse()

    where body is the contents of example.zip and headers is an empty dictionary. This request eventually timed out without printing anything, which I think exonerates curl from being the issue, so I believe something is going on in web.py which isn't quite right (or at least not sufficiently clear). Any web.py experts got some tips? Cheers, Richard
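
    One web.py-side detail that may explain the curl (18): if the handler returns without ever reading the request body, the connection can be closed while the client still has data outstanding. Consuming the body with web.data() before returning is a cheap thing to try; a minimal sketch of the same handler:

        import web

        urls = ('/my', 'Test')

        class Test:
            def POST(self):
                payload = web.data()  # read (and here simply discard) the raw POST body
                return "Here is your content"

        app = web.application(urls, globals())

        if __name__ == "__main__":
            app.run()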


  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

      1. Request received from client
      2. Business layer invokes data layer to retrieve data from database
      3. Business layer processes result and determines which operation to execute
      4. Business layer invokes appropriate data operation
      5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server.

    My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code (which I have not worked out yet) and/or some flaw in my environment. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
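
    For reference, the shape of such a CLR procedure: steps 2-4 run inside SQL Server on the in-process context connection, so the client pays for a single round trip. A sketch with hypothetical schema and logic:

        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public class StoredProcedures
        {
            [SqlProcedure]
            public static void ProcessRequest(SqlInt32 requestId)
            {
                // "context connection=true" reuses the caller's own connection,
                // in-process, avoiding a second network round trip
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    using (SqlCommand select = new SqlCommand(
                        "SELECT Status FROM Requests WHERE Id = @id", conn))
                    {
                        select.Parameters.AddWithValue("@id", requestId.Value);
                        object status = select.ExecuteScalar();   // step 2

                        // step 3: decide which write to perform
                        string sql = "ok".Equals(status)
                            ? "UPDATE Requests SET Processed = 1 WHERE Id = @id"
                            : "INSERT INTO Errors (RequestId) VALUES (@id)";

                        using (SqlCommand write = new SqlCommand(sql, conn))  // step 4
                        {
                            write.Parameters.AddWithValue("@id", requestId.Value);
                            write.ExecuteNonQuery();
                        }
                    }
                }
            }
        }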


  • When should I be cautious about using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got into a discussion with our team lead regarding why we don't use data binding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this:

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        Dim i As Integer

        For i = 0 To dt.Rows.Count - 1 ' (not sure why we do not use a For Each here)
            gridRow = grid.Rows.Add()
            gridRow(constantProductID).Value = dt.Rows(i)("ProductID")
            gridRow(constantProductDesc).Value = dt.Rows(i)("ProductDescription")
        Next
        ' (I am probably missing something in the code, but that is basically it)

    Our team lead was saying that he got burned using data binding when working with Sheridan grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand.

    The reason the conversation even came up was because I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data binding seems like a must-have in order to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data binding in .NET?
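
    For comparison, the data-bound version of the loop above is a single assignment; a minimal sketch, assuming a standard WinForms DataGridView (columns can be mapped via each column's DataPropertyName):

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        grid.DataSource = dt ' the grid generates and fills its rows from the DataTable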


  • Strange error when filling a data adapter

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error:

        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    And here is the code, which is honestly pretty simple:

        var excelFileName = string.Format("c:/Metadata_Tool.xlsm");
        var connectionString = string.Format(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;",
            excelFileName);

        var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString);
        var ds = new DataSet();
        adapter.Fill(ds, "VTX");
        DataTable data = ds.Tables["VTX"];

        foreach (DataRow myRow in data.Rows)
        {
            foreach (DataColumn myColumn in data.Columns)
            {
                Console.Write("\t{0}", myRow[myColumn]);
            }
            Console.WriteLine();
        }
        Console.ReadLine();

    I get the error on the line adapter.Fill(ds, "VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debugging/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!


  • rs232 communication, general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100Hz over the serial port.

    I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an input stream. But this next part is confusing me, and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the input stream 1 byte at a time. How do I get the while loop timing so that there is always a byte available to be read whenever it reaches the read-byte line? I'm guessing that I can't just put a sleep function inside the while loop to try and match it to the hardware sample rate. Is it just a matter of continually reading the input stream in the while loop, so that if the loop is too fast it won't do anything (since there's no new data), and if it's too slow the data will accumulate in the input stream buffer?

    Like I said, I'm only trying to understand this conceptually, so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!
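
    Conceptually, the second guess is the right one, and the usual pattern leans on the fact that InputStream.read() blocks: the loop just reads as fast as it can, the OS buffers whatever arrives in between, and no explicit matching of the 100Hz sample rate is needed. A minimal Java sketch, assuming in is the serial port's InputStream (e.g. from javax.comm/RXTX SerialPort.getInputStream()) and plot() is a hypothetical consumer:

        int b;
        // read() blocks until a byte is available, so the loop naturally runs
        // at the hardware's rate; bytes that arrive while we are busy simply
        // accumulate in the input buffer until the next read
        while ((b = in.read()) != -1) {
            int sample = b & 0xFF;   // unsigned byte: voltage code 0..255
            plot(sample);
        }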


  • Getting data from array of DataSet objects returned from web service

    - by Sarah Vessels
    I have a web service that I want to access when it is added as a web reference to my C# project. A particular method in the web service takes a SQL query string and returns the results of the query as a custom type. When I add the web service reference, the method shows up as returning DataSet[] instead of the custom type. This is fine, provided I can still somehow access the data returned from the query within those DataSet objects.

    I ran a particular query that should return 6 rows; I got back a DataSet[] array with 6 elements. However, when I iterate over those DataSet objects, none of them has any tables (via the Tables property on the DataSet). What gives? Where is my data? The web service is tested and works when I use it as a data source in a Report Builder 2.0 report. I am able to send an XML SOAP query to the web service and get back XML results containing my data.


  • How to access this data in PHP?

    - by George Edison
    Okay. Now I give up. I have been playing with this for hours. I have a variable named $data. The variable contains these contents (via var_export; I removed some non-essential data):

        array (
          'headers' =>
          array (
            'content-type' => 'multipart/alternative; boundary="_689e1a7d-7a0a-442a-bd6c-a1fb1dc2993e_"',
          ),
          'ctype_parameters' =>
          array (
            'boundary' => '_689e1a7d-7a0a-442a-bd6c-a1fb1dc2993e_',
          ),
          'parts' =>
          array (
            0 =>
            stdClass::__set_state(array(
              'headers' =>
              array (
                'content-type' => 'text/plain; charset="iso-8859-1"',
                'content-transfer-encoding' => 'quoted-printable',
              ),
              'ctype_primary' => 'text',
            )),
          ),
        )

    I want to access the first headers value (near the top of the dump) - simple: $data->headers. I also want to access the headers value inside the stdClass::__set_state section - how? How can I possibly access the values within the stdClass::__set_state section? I tried var_export($data->parts); but all I get is NULL.
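
    Judging from the var_export output, the top level is a plain array whose 'parts' element holds stdClass objects, so the access has to mix array and object syntax. A sketch, assuming $data is exactly the structure dumped above:

        <?php
        // Top-level values are array entries, not object properties...
        $headers = $data['headers'];

        // ...but each element of 'parts' is an object, so -> takes over there
        $partHeaders = $data['parts'][0]->headers;
        echo $partHeaders['content-type'];   // text/plain; charset="iso-8859-1"

        // var_export($data->parts) yields NULL because $data is an array;
        // object syntax on an array returns nothing
        ?>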


  • Map element position in data file to class property

    - by Augusto
    I need to read/write files following a format provided by a third-party specification. The specification itself is pretty simple: it says the position and the size of the data that will be saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long (it has about 400 elements), but lots of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data. So, instead of a big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data.

    The problem is: how to map the positions in the file to the objects' properties, so that I don't need to repeat the values in the reading/writing classes? I could use custom attributes and retrieve them via reflection, but since the code will be running on devices with small memory and processors, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();

            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
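
    One reflection-free option is to keep the layout in a static mapping table built once: an array of (position, size, assignment) entries, so the specification lives in data instead of in repeated read calls. A sketch with hypothetical names (it assumes DataFile exposes the listed properties and that one record arrives as a string):

        using System;

        struct FieldMap
        {
            public int Position;                     // 1-based, as in the spec
            public int Size;
            public Action<DataFile, string> Assign;  // stores the parsed slice
        }

        static readonly FieldMap[] Layout = new FieldMap[]
        {
            new FieldMap { Position = 1,  Size = 10,
                           Assign = (d, s) => d.SerialNumber = long.Parse(s) },
            new FieldMap { Position = 11, Size = 2,
                           Assign = (d, s) => d.Hour = int.Parse(s) },
            // ... remaining entries follow the specification table
        };

        public DataFile Read(string record)
        {
            DataFile dataFile = new DataFile();
            foreach (FieldMap f in Layout)
                f.Assign(dataFile, record.Substring(f.Position - 1, f.Size));
            return dataFile;
        }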


  • Sweave/R - Automatically generating an appendix that contains all the model summaries/plots/data profiles

    - by John Horton
    I like the idea of making research available at multiple levels of detail, i.e., abstract for the casually curious, full text for the more interested, and finally the data and code for those working in the same area or trying to reproduce your results. In between the actual text and the data/code level, I'd like to insert another layer. Namely, I'd like to create a kind of automatically generated appendix that contains the full regression output, diagnostic plots, exploratory graphs, data profiles etc. from the analysis, regardless of whether those plots/regressions made it into the final paper.

    One idea I had was to write a script that would examine the .Rnw file and automatically:

      1. Profile all data sets that are loaded (sort of like the Hmisc(?) package)
      2. Summarize all regressions - i.e., run summary(model) for all models (a rough sketch of this follows below)
      3. Present all plots (regardless of whether they made it into the final version)

    The idea is to make this kind of a low-effort, push-button sort of thing, as opposed to a formal appendix written like the rest of a paper. What I'm looking for is some ideas on how to do this in R in a relatively simple way. My hunch is that there is some way of going through the namespace, figuring out what something is, and then dumping it into a PDF. Thoughts? Does something like this already exist?
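
    As a rough proof of concept for the second item, the workspace can be walked at the end of the Sweave run and every fitted model summarized, with no need to track them by hand. A sketch, assuming the models live in the global environment:

        # Summarize every lm/glm object found in the workspace
        for (name in ls(envir = globalenv())) {
          obj <- get(name, envir = globalenv())
          if (inherits(obj, c("lm", "glm"))) {
            cat("\n==== Model:", name, "====\n")
            print(summary(obj))
          }
        }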


  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference material, I would appreciate it.

    The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say if I have 'Italian Restaurant', it will have the following classification:

        Food Services > Restaurants > Italian

    However, an organization may belong to multiple groups. A restaurant may serve both Chinese and Italian, so it will fit into 2 classifications:

        Food Services > Restaurants > Italian
        Food Services > Restaurants > Chinese

    The classification reference tables would be like the following:

        ORG_CLASS (RowId, ClassCode, ClassName)
            1, FOOD, Food Services

        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
            1, FOOD, REST, Restaurants

        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
            100, FOOD, REST, ITAL, Italian
            101, FOOD, REST, CHIN, Chinese
            102, FOOD, REST, SPAN, Spanish
            103, FOOD, REST, MEXI, Mexican
            104, FOOD, REST, FREN, French
            105, FOOD, REST, MIDL, Middle Eastern

    The actual data tables would be like the following. I will allow an organization a max of 3 classifications: 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:

        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddress)
            100, 103, NULL, MyRestaurant1, MyAddr1
            100, 102, NULL, MyRestaurant2, MyAddr2
            100, 104, 105,  MyRestaurant3, MyAddr3

    During data add, a dialog could let the user choose the class, category and type, and the corresponding GroupId could be populated with the RowId from the ORG_TYPE table.

    During search, if all three classification levels are chosen, it will be more specific. For example, if Food Services > Restaurants > Italian is the criteria, the where clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen (Food Services > Restaurants), I have to do 'where OrgGroupId1 in (100, 101, 102, 103, 104, 105, ...)' - and there could be a hundred entries in that list. I will disallow class-level search; that is, I will force selection of a class and category.

    The Ids would be integers. I am trying to anticipate performance issues and other issues. Overall, would this work? Or do I need to throw this out and start from scratch?
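
    For what it's worth, the fixed trio of OrgGroupId columns is the part that usually hurts later (the long IN-list search and the 3-classification cap). A junction table keeps the same reference tables but removes both limits; a sketch, assuming a hypothetical OrgId primary key on ORGANIZATION_TABLE:

        CREATE TABLE ORG_CLASSIFICATION (
            OrgId     INT NOT NULL,       -- FK to ORGANIZATION_TABLE
            TypeRowId INT NOT NULL,       -- FK to ORG_TYPE.RowId
            PRIMARY KEY (OrgId, TypeRowId)
        );

        -- Type-level search stays a simple equality:
        SELECT o.OrgName
        FROM ORGANIZATION_TABLE o
        JOIN ORG_CLASSIFICATION oc ON oc.OrgId = o.OrgId
        JOIN ORG_TYPE t ON t.RowId = oc.TypeRowId
        WHERE t.TypeCode = 'ITAL';

        -- Category-level search joins on codes instead of enumerating ids:
        SELECT DISTINCT o.OrgName
        FROM ORGANIZATION_TABLE o
        JOIN ORG_CLASSIFICATION oc ON oc.OrgId = o.OrgId
        JOIN ORG_TYPE t ON t.RowId = oc.TypeRowId
        WHERE t.ClassCode = 'FOOD' AND t.CategoryCode = 'REST';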


  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON, with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data, for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data: definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK (the one I am using during development is here). Thanks


  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is:

        Run-time error '-2147217843 (80040e4d)': Automation error

    I checked the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider, and that one way to mitigate this is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters, and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim strSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create ADODB connection object, open connection
            'and construct the connection string object.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = _
                "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; " & _
                "Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open

            On Error GoTo ErrorHandler

            'Open Excel and run query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                     "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)

            'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub

    (Note: the original declared the variable as stSQL but used strSQL in the body; the declaration is corrected above.)
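
    Two details above may matter, since -2147217843 (0x80040E4D) is OLE DB's authentication-failure code: with most drivers the single quotes in Pwd='helpmeOB1' become part of the password, and OPENDATASOURCE is evaluated on the server, so C:\Database.xlsx must be a path the SQL Server machine can see, not a path on the local workstation. A hedged rewrite of just the connection string, dropping the stray quotes:

        cnt.ConnectionString = _
            "Driver={SQL Server};Server=onlineSQLServer2010.foo.com;" & _
            "Database=fooDB;Uid=logonalready;Pwd=helpmeOB1;"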


  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits):

        0001 0001 p000 0100 0p00 0001 00p0 1110 0...

    should become

        0001 0001 0000 1000 0000 0100 0111 00...

    Basically, I need to strip off every ninth bit to just obtain the data bits. How can I achieve this?

    This is related to another question asked here some time back. This is on a 32-bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets.

    This is what I have tried so far: I created a "boolean" array and added the bits into the array based on the bitset of each octet. I then look at every ninth index of the array and throw it away, then move the remaining array down one index. When done, I've got only the data bits left. I was thinking there may be better ways of doing this.
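
    Since every ninth bit of the stream goes away, the trick is to skip input bits whose stream index modulo 9 is 8 and pack the rest. A minimal C sketch, assuming MSB-first bit order within each octet (which matches, and was checked against, the example above):

        #include <stdint.h>
        #include <string.h>

        /* Copy only the data bits, dropping every ninth (parity) bit.
           in_bits is the total number of buffer bits (at most 45 here). */
        static void strip_parity(const uint8_t *in, int in_bits, uint8_t *out)
        {
            int out_pos = 0;                    /* next output bit index */
            memset(out, 0, (in_bits + 7) / 8);  /* ample room for the result */

            for (int i = 0; i < in_bits; i++) {
                if (i % 9 == 8)                 /* ninth bit of each group */
                    continue;                   /* -> parity, skip it */
                int bit = (in[i / 8] >> (7 - i % 8)) & 1;
                out[out_pos / 8] |= bit << (7 - out_pos % 8);
                out_pos++;
            }
        }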


  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options:

      1. Use nvarchar/nclob fields for those fields that need Greek (it is not all fields). We have tested this and gotten it to work with Java coding.
      2. Convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got this from the Oracle site/Oracle Support. I know what is involved (lossy data, etc., and increasing the size of the database).

    My question is this: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data.

    I also ran the Oracle Character Set Scanner (the Oracle command-line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with 3rd-party databases? These are customers of our data, and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.


  • Aggregating, restructuring hourly time series data in R

    - by Advait Godbole
    I have a year's worth of hourly data in a data frame in R:

        > str(df.MHwind_load)    # compactly displays structure of data frame
        'data.frame':  8760 obs. of  6 variables:
         $ Date         : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ...
         $ Time..HRs.   : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Hour.of.Year : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Wind.MW      : int  375 492 483 476 486 512 421 396 456 453 ...
         $ MSEDCL.Demand: int  13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ...
         $ Net.Load     : int  12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ...

    While preserving the hourly structure, I would like to know how to extract:

      1. a particular month/group of months
      2. the first day/first week etc. of each month
      3. all Mondays, all Tuesdays etc. of the year

    I have tried using "cut" without result, and after looking online I think that "lubridate" might be able to do so, but I haven't found suitable examples. I'd greatly appreciate help on this issue.
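
    A possible starting point with lubridate, converting the Date factor once and indexing off it (object names are hypothetical):

        library(lubridate)

        d <- ymd(as.character(df.MHwind_load$Date))    # Factor -> Date

        # 1. a particular month (April), or a group of months
        april <- df.MHwind_load[month(d) == 4, ]
        q2    <- df.MHwind_load[month(d) %in% 4:6, ]

        # 2. the first day / first week of each month
        first.days  <- df.MHwind_load[mday(d) == 1, ]
        first.weeks <- df.MHwind_load[mday(d) <= 7, ]

        # 3. all Mondays of the year (wday(): 1 = Sunday, 2 = Monday, ...)
        mondays <- df.MHwind_load[wday(d) == 2, ]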


  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and App Store comments, ratings and rankings data for a defined app. I'm also aware of solutions like:

      - AppViz
      - appsales-mobile
      - iphone-stats
      - Heartbeat.app

    I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the App Store. Would be interested to hear any commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials).

    To be clear, I'm really looking for something that hits all of the areas mentioned:

      - App Store (per store): comments, ratings, category/store rankings
      - iTunes Connect: the contents of the sales reports

    Analysis/graphs of the data are not necessary (but would be a nice-to-have, I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just go roll my own?


  • WPF tree data binding

    - by Am
    Hi, I have a well-defined tree repository where I can rename items, move them up, move them down, etc., and add new items and delete them. The data is stored in a table as follows:

        Index  Parent  Label    Left  Right
        1      0       root     1     14
        2      1       food     2     7
        3      2       cake     3     4
        4      2       pie      5     6
        5      1       flowers  8     13
        6      5       roses    9     10
        7      5       violets  11    12

    representing the following tree (Left/Right values in parentheses):

        (1) root (14)
            (2) food (7)
                (3) cake (4)
                (5) pie (6)
            (8) flowers (13)
                (9) roses (10)
                (11) violets (12)

    or simply:

        root
            food
                cake
                pie
            flowers
                roses
                violets

    Now, my problem is how to represent this in a bindable way, so that a TreeView can handle all the possible data changes. Renaming is easy; all I need is to make the label an updatable field. But what if a user moves flowers above food? I can make the relevant data changes, but they cause a complete data change to all other items in the tree, and all the examples I found of bindable hierarchies only handle static structures. So my current (and bad) solution is to reload the displayed tree after a relocation change. Any direction will be good. Thanks
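
    The usual WPF approach is to project the flat nested-set rows into a node graph whose children are ObservableCollection instances, and bind the TreeView through a HierarchicalDataTemplate with ItemsSource="{Binding Children}". Moving flowers above food is then two collection operations, and the TreeView follows along with no reload. A sketch of the node shape (hypothetical names):

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class TreeNode : INotifyPropertyChanged
        {
            private string label;

            public TreeNode(string label)
            {
                this.label = label;
                Children = new ObservableCollection<TreeNode>();
            }

            public ObservableCollection<TreeNode> Children { get; private set; }

            public string Label   // rename support: the setter raises the change event
            {
                get { return label; }
                set { label = value; OnPropertyChanged("Label"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
        }

        // Moving "flowers" above "food" under the same parent:
        //   root.Children.Remove(flowers);
        //   root.Children.Insert(0, flowers);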


  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form which has numerous fields. There is one field for links where you enter a link, click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this:

        $.ajax({
            type: "POST",
            url: "add_stock",
            dataType: "json",
            data: { "links": link_array }
        });

    when the add link button is selected, the data goes no problem to the correct place and gets put in the db correctly. If I bind the above function to the submit form button using $('#stock_form').submit(...), then the rest of the form data is sent, but not the link_array.

    I can obviously pass the link array back into a hidden field in HTML, but then I'd have to unpack the array into comma-separated values and break the comma-separated string apart in PHP. It just seems 100X easier to unpack the Javascript array in PHP without any other fuss. So, how can you send an array from javascript using $.ajax concurrently with the rest of the $_POST data from the HTML form? Please note that I'm using the Kohana 3.0 framework, but really that shouldn't make a difference; what I want to do is add this js array to the $_POST array that is already going. Thanks!
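
    One way to avoid the hidden field entirely is to build the POST body inside the submit handler, combining the serialized form with the array. A sketch (it assumes the handler should suppress the browser's own submit):

        $('#stock_form').submit(function () {
            $.ajax({
                type: "POST",
                url: "add_stock",
                dataType: "json",
                // serialize() packs the form fields; $.param() appends the array
                // as links[]=... pairs, so PHP sees $_POST['links'] as an array
                data: $(this).serialize() + '&' + $.param({ links: link_array })
            });
            return false;   // stop the normal (non-ajax) form submission
        });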


  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self-joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows, depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow

    As you can see, I was using a CTE to return the data subset before, which caused some performance problems, as SQL Server was re-running the SELECT statement in the CTE for every join instead of running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason...

    UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete, whereas the CTE version only took 40 minutes. I believe the table variables are slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure

    The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for my UDF to perform quickly? Thanks a lot in advance.
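
    Since the restriction applies to functions but not procedures, one escape hatch is to move the body into a stored procedure, which may use a real #temp table (with statistics and indexes, which is what self-joins want) and return the rows as a result set. A sketch with hypothetical names:

        CREATE PROCEDURE dbo.GetWorkingSet
            @FromDate datetime,
            @ToDate   datetime
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT *
            INTO #subset
            FROM dbo.BigTable
            WHERE EventDate BETWEEN @FromDate AND @ToDate;

            CREATE INDEX IX_subset_key ON #subset (KeyColumn);

            -- the complex self-joining query runs against the indexed subset
            SELECT s1.*
            FROM #subset s1
            JOIN #subset s2 ON s2.KeyColumn = s1.KeyColumn;
        END

    The trade-off is that callers can no longer join to it like a TVF; they would INSERT ... EXEC the procedure's output into their own temp table first.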


  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/ajax and need to display the results with the success function (they are different depending on whether the email exists in the database). But the trick is that I do not need the h2 and the first p tag. How can I show only div#alertmsg and the second p tag? I've tried removing unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data){
                var result = $(data);
                $('#success').append(result);
            }
        });

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
            <ul>
                <li>Email address already exists. Duplicates are not allowed.</li>
            </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>
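
    Because the h2, the two p tags, and the div all come back as top-level siblings, .filter() (not .find()) is the tool for picking out the keepers. A sketch of the success callback:

        success: function(data) {
            var result = $(data);
            // .filter() matches the top-level nodes of the returned fragment;
            // .find() would only search inside them and come back empty here
            $('#success')
                .append(result.filter('#alertmsg'))
                .append(result.filter('p:last'));
        }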

