Search Results

Search found 19017 results on 761 pages for 'purchase order'.


  • Can't Set Crystal Report Selection Formula Programmatically

    - by eidylon
    Hello all, first, I can't stand Crystal! Okay, that's off my chest... Now, we have an old VB6 app we maintain for a client, which uses the Crystal Automation library to programatically change the record selection formulas in a bunch of Crystal Reports 8.5 reports. There are two reports which are ALMOST identical. I had to change them recently to add another field from another table. When I added the table in to the reports though, while it added it in the visual designer, it did not add it in the FROM clause of the SQL statement. So, I manually edited the SQL statement to add in the additional join. KO, works great. If i run the reports in Crystal preview mode, they work exactly as expected. Now, the users went to test the changes from within the VB app. One of the reports works fine and dandy. The other report however, is failing to set the selection formula as expected. The code sets the selection formulas using the function PESetSelectionFormula. I verified that the string being passed in to the function as the new selection formula is correct via a step-through examination of the variables. The call to PESetSelectionFormula seems to be working okay, and is returning a value of 1, which as near as I can find anywhere indicates success. (The other report, which is working fine from code is also returning 1.) However, the report is failing with an error: Error Code: 534 - Error detected by database DLL. The code, for debugging purposes dumps out the SQL string currently being used by the report. The SQL coming out of the report is: SELECT ... FROM ... WHERE ORDER BY ... As you can see, the WHERE clause is blank, which I would imagine is why the database DLL is upchucking on this statement. I don't understand why the automation library is not setting the WHERE clause even though the call to PESetSelectionFormula is being passed a valid string and is returning success. I thought perhaps it was because I had manually edited the SQL in the report to add the table it wasn't adding, but I did the same thing in the other nearly identical report, and that one is working fine. Anyone have any ideas why PESetSelectionFormula might report success but not actually do anything? P.S. I have already tried doing a Database Verify Database from the menus, and that said the report was all up to date and did not help at all.

    Read the article

  • Diophantine Equation [closed]

    - by ANIL
    In mathematics, a Diophantine equation (named for Diophantus of Alexandria, a third-century Greek mathematician) is a polynomial equation where the variables can only take on integer values. Although you may not realize it, you have seen Diophantine equations before: one of the most famous Diophantine equations is X^n+Y^n=Z^n. We are not certain that McDonald's knows about Diophantine equations (actually we doubt that they do), but they use them! McDonald's sells Chicken McNuggets in packages of 6, 9 or 20 McNuggets. Thus, it is possible, for example, to buy exactly 15 McNuggets (with one package of 6 and a second package of 9), but it is not possible to buy exactly 16 nuggets, since no non-negative integer combination of 6's, 9's and 20's adds up to 16. To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n.
    Problem 1: Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets, by finding solutions to the Diophantine equation. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65.
    Problem 2: Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"
    Hints: Hypothesize possible instances of numbers of McNuggets that cannot be purchased exactly, starting with 1. For each possible instance, called n: (a) test if there exist non-negative integers a, b, and c, such that 6a+9b+20c = n (this can be done by looking at all feasible combinations of a, b, and c); (b) if not, n cannot be bought in exact quantity, so save n. When you have found six consecutive values of n that in fact pass the test of having an exact solution, the last answer that was saved (not the last value of n that had a solution) is the correct answer, since you know by the theorem that any amount larger can also be bought in exact quantity.
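
    The problem statement does not fix a language, so purely as an illustration, here is a minimal Python sketch of the kind of iterative program Problem 2 asks for (the choice of Python and the helper name can_buy are mine):

        def can_buy(n):
            # n is purchasable if 6a + 9b + 20c == n for some non-negative a, b, c
            for c in range(n // 20 + 1):
                for b in range((n - 20 * c) // 9 + 1):
                    if (n - 20 * c - 9 * b) % 6 == 0:
                        return True
            return False

        largest_unbuyable = None
        consecutive = 0
        n = 1
        while consecutive < 6:      # 6 consecutive successes: everything larger works too
            if can_buy(n):
                consecutive += 1
            else:
                consecutive = 0
                largest_unbuyable = n
            n += 1

        print("Largest number of McNuggets that cannot be bought in exact quantity: %d"
              % largest_unbuyable)

    The stopping rule is exactly the one given in the hints: once six consecutive amounts are purchasable, every larger amount can be reached by adding packs of 6.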

    Read the article

  • timeIntervalSinceDate Accuracy

    - by mmccomb
    I've been working on a game with an engine that updates 20 times per second. I've got to the point now where I want to start getting some performance figures and tweak the rendering and logic updates. In order to do so I started to add some timing code to my game loop, implemented as follows... NSDate* startTime = [NSDate date]; // Game update logic here.... // Also timing of smaller internal events NSDate* endTime = [NSDate date]; [endTime timeIntervalSinceDate:startTime]; I noticed, however, that when I timed blocks within the outer timing logic, the time they took to execute did not sum up to match the overall time taken. So I wrote a small unit test to demonstrate the problem, in which I time the overall time taken to complete the test and then 10 smaller events. Here it is... - (void)testThatSumOfTimingsMatchesOverallTiming { NSDate* startOfOverallTime = [NSDate date]; // Variable to hold summation of smaller timing events in the upcoming loop... float sumOfIndividualTimes = 0.0; NSTimeInterval times[10] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}; for (int i = 0; i < 10; i++) { NSDate* startOfIndividualTime = [NSDate date]; // Kill some time... sleep(1); NSDate* endOfIndividualTime = [NSDate date]; times[i] = [endOfIndividualTime timeIntervalSinceDate:startOfIndividualTime]; sumOfIndividualTimes += times[i]; } NSDate* endOfOverallTime = [NSDate date]; NSTimeInterval overallTimeTaken = [endOfOverallTime timeIntervalSinceDate:startOfOverallTime]; NSLog(@"Sum of individual times: %fms", sumOfIndividualTimes); NSLog(@"Overall time: %fms", overallTimeTaken); STAssertFalse(TRUE, @""); } And here's the output... Sum of individual times: 10.001377ms Overall time: 10.016834ms Which illustrates my problem quite clearly. The overall time was 0.000012ms but the smaller events took only 0.000001ms. So what happened to the other 0.000011ms? Is there anything that looks particularly wrong with my code? Or is there an alternative timing mechanism I should use?
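
    As a language-agnostic illustration of where the "missing" time usually goes, here is a small Python sketch: everything that runs between the end of one inner measurement and the start of the next (creating the timestamp objects, the summation, loop bookkeeping, logging) is counted by the outer timer but by none of the inner ones, so the inner times always sum to slightly less than the overall figure.

        import time

        overall_start = time.perf_counter()
        sum_of_blocks = 0.0

        for _ in range(10):
            block_start = time.perf_counter()
            time.sleep(0.1)                      # the "real" work being measured
            block_end = time.perf_counter()
            sum_of_blocks += block_end - block_start
            # everything here runs inside the overall timer but outside every
            # per-block timer, so it only shows up in the overall figure

        overall = time.perf_counter() - overall_start
        print("sum of blocks: %.6f s" % sum_of_blocks)
        print("overall:       %.6f s" % overall)    # a little larger than the sum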

    Read the article

  • Full-text search for Django: MySQL not so bad? (vs. Sphinx, Xapian)

    - by Eric
    I am studying full-text search engines for Django. It must be simple to install, with fast indexing, fast index updates, no blocking while indexing, and fast search. After reading many web pages, I put on my short list: MySQL MyISAM fulltext, djapian/python-xapian, and django-sphinx. I did not choose Lucene because it seems complex, nor Haystack as it has fewer features than djapian/django-sphinx (like field weighting). Then I made some benchmarks. To do so, I collected many free books on the net to generate a database table with 1,485,000 records (id, title, body); each record is about 600 bytes long. From the database, I also generated a list of 100,000 existing words and shuffled them to create a search list. For the tests, I made 2 runs on my laptop (4 GB RAM, dual-core 2.0 GHz): the first one just after a server reboot to clear all caches, the second done just after that in order to test how good the cached results are. Here are the "home made" benchmark results:
    1485000 records with title (150 bytes) and body (450 bytes)
    MySQL 5.0.75/Ubuntu 9.04 fulltext:
    Full indexing: 7m14.146s
    1 thread, 1000 searches with a single word randomly taken from the database: first run: 0:01:11.553524; next run: 0:00:00.168508
    MySQL 5.5.4 m3/Ubuntu 9.04 fulltext:
    Full indexing: 6m08.154s
    1 thread, 1000 searches with a single word randomly taken from the database: first run: 0:01:11.553524; next run: 0:00:00.168508
    1 thread, 100000 searches with a single word randomly taken from the database: first run: 9m09s; next run: 5m38s
    1 thread, 10000 random strings (random strings should not be found in the database), just after the 100000-search test: 0:00:15.007353
    1 thread, boolean search, 1000 x (+word1 +word2): first run: 0:00:21.205404; next run: 0:00:00.145098
    Djapian fulltext:
    Full indexing: 84m7.601s
    1 thread, 1000 searches with a single word randomly taken from the database, with prefetch: first run: 0:02:28.085680; next run: 0:00:14.300236
    python-xapian fulltext:
    1 thread, 1000 searches with a single word randomly taken from the database: first run: 0:01:26.402084; next run: 0:00:00.695092
    django-sphinx fulltext:
    Full indexing: 1m25.957s
    1 thread, 1000 searches with a single word randomly taken from the database: first run: 0:01:30.073001; next run: 0:00:05.203294
    1 thread, 100000 searches with a single word randomly taken from the database: first run: 12m48s; next run: 9m45s
    1 thread, 10000 random strings (random strings should not be found in the database), just after the 100000-search test: 0:00:23.535319
    1 thread, boolean search, 1000 x (word1 word2): first run: 0:00:20.856486; next run: 0:00:03.005416
    As you can see, MySQL is not so bad at all for full-text search. In addition, its query cache is very efficient. MySQL seems to me a good choice, as there is nothing to install (I just need to write a small script to synchronize an InnoDB production table to a MyISAM search table) and as I do not really need advanced search features like stemming etc... Here is the question: what do you think about the MySQL full-text search engine vs. Sphinx and Xapian?
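
    Purely as an illustration of the kind of "home made" benchmark loop described above (the table name, columns, credentials and the pymysql driver are all assumptions, and a FULLTEXT index on (title, body) is presumed to exist), a rough Python sketch:

        import random
        import time
        import pymysql  # any DB-API driver would do; pymysql is just an example

        conn = pymysql.connect(host="localhost", user="bench", password="bench",
                               database="books")        # hypothetical credentials
        cur = conn.cursor()

        # the shuffled list of 100,000 words known to exist in the corpus
        words = [w.strip() for w in open("wordlist.txt")]
        random.shuffle(words)

        start = time.perf_counter()
        for word in words[:1000]:
            # MyISAM fulltext search against the (title, body) index
            cur.execute("SELECT id FROM books WHERE MATCH(title, body) AGAINST (%s)",
                        (word,))
            cur.fetchall()
        print("1000 single-word searches: %.3f s" % (time.perf_counter() - start))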

    Read the article

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
    I've actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iphone (without the use of safari class). Reasons for this is because the safari class while CAN render MJPEG natively, does not do so at the framerate I would like. So I tried drawing it natively, but I've come up with performance issues, namely a syncing issue between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid). So I have a connection set up to my server and I do get the JPEGS. It's just data I insert into a NSMutableArray buffer CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0]; //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (_cf_buffer_data); CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref); CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault); CGImageRef imgRef = image; CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef); CGImageRelease(image); CGDataProviderRelease(imgDataProvider); please note this is the gist of my code, but it should summarize what I am trying to accomplish with regards to drawing. Also in order to get the framerate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]. NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool while(1) { //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE]; //sleep(unsigned int ); usleep(MICRO_REFRESH_VALUE); if ([_buffer_array count] > 10) { //NSLog(@"stuff %d", [_buffer_array count]); //[self setNeedsDisplay]; [self performSelectorOnMainThread:@selector(setNeedsDisplay) withObject:nil waitUntilDone:NO]; } } [pool release]; // Release the objects in the pool. My buffer of jpeg data actually fills up quite quick, but I can't seem to actually consume what i'm getting at the same rate, actually much slower. Are there any documents that can describe what kind of performance tuning I can do to make it go faster when rendering the JPEG to the screen? Or am I kind of stuck here? Thanks!

    Read the article

  • Any way to simulate MouseOver in WPF

    - by jpierson
    I'm working on a link control in WPF which fits the "text with icon links" case in the Windows UX Guide. What I want is to have some text within a hyperlink that appears to the right of some image. In my case I started off by using a TextBlock that contained a Hyperlink which then contained my image and some text. <TextBlock> <Hyperlink> <Rectangle Height="16" Width="16" Fill="{StaticResource MyIconBrush}" Stretch="UniformToFill" VerticalAlignment="Center" HorizontalAlignment="Left" /> <Run>My link text</Run> </Hyperlink> </TextBlock> The problem with this however was that the image, being taller than my text, produced an effect where the text was aligned to the bottom. Unfortunately I haven't found any way to control the vertical alignment within the TextBlock or within the Hyperlink, so I've resorted to attempting an alternative layout where the Hyperlink and the Rectangle that represents my vector icon are separated in order to get them to align properly, as shown below. <TextBlock> <Hyperlink> <StackPanel Orientation="Horizontal"> <Rectangle Height="16" Width="16" Fill="{StaticResource MyIconBrush}" Stretch="UniformToFill" VerticalAlignment="Center" HorizontalAlignment="Left" /> <TextBlock VerticalAlignment="Center"><Hyperlink>My link text</Hyperlink></TextBlock> </StackPanel> </Hyperlink> </TextBlock> The problem with this however is that now that my icon and my hyperlink are separated, I don't get the MouseOver appearance of my link when the mouse is over my icon and vice versa. So this got me to thinking: how do I simulate MouseOver for a given control, such as with a checkbox where you get the MouseOver effect on the box when you actually mouse over its associated text? I know in the HTML world the label element has a for attribute that can be used to specify which control it is labeling, which will basically do what I'm looking for. Also I can imagine that in other scenarios it may be nice to have a label that, when you mouse over it, shows a corresponding text box as if the mouse is over it, and possibly when clicked focus is given to the corresponding text box as well. For now though I'm interested mainly in how to get a label or label-like element in WPF to act as a proxy for a given control in terms of its MouseOver state. Also I would like to do this purely in XAML if possible.

    Read the article

  • Sticky Footer Dilemma - so close

    - by fmz
    I have an effective sticky footer solution, but I have a slight conflict because of the need for a background image in addition to the repeating banner image across the top. In order to make the large watermark type image show up on this site I added the following div: <div id="body-outer"> The only problem is that once I add this div tag, the footer climbs up the page, overriding all content in its path. If I remove that div, the footer is as sticky as one could hope. Is there some way to have the background image and the sticky footer too? Here is the basic html structure: <body class="home"> <div id="body-outer"> <div id="container"> <div id="header"> ... <div id="footer"> <p>Placeholder for footer content</p> </div> Here is the css for the sticky footer and the background image: #body-outer { background: url(../_images/bg_body.jpg) no-repeat center top; height: 100%; } body { width: 100%; background: url(../_images/bg_html.jpg) repeat-x left top; } #container { width: 960px; margin: 0 auto; } html, body, #container { height: 100%; } body > #container { height: auto; min-height: 100%; padding-bottom: 140px; } #main { padding-bottom: 0; } #footer { position: relative; width: 100%; background-color: #4e8997; margin-top: ; /* negative value of footer height */ height: 140px; clear:both; } .clearfix:after { content: "."; display: block; height: 0; clear: both; visibility: hidden; } .clearfix {display: inline-block;} * html .clearfix { height: 1%;} Any assistance in sorting this out would be greatly appreciated. thanks.

    Read the article

  • Most efficient algorithm for merging sorted IEnumerable<T>

    - by franck
    Hello, I have several huge sorted enumerable sequences that I want to merge. These lists are manipulated as IEnumerable but are already sorted. Since the input lists are sorted, it should be possible to merge them in one pass, without re-sorting anything. I would like to keep the deferred execution behavior. I tried to write a naive algorithm which does that (see below). However, it looks pretty ugly and I'm sure it can be optimized. There may be a more academic algorithm... IEnumerable<T> MergeOrderedLists<T, TOrder>(IEnumerable<IEnumerable<T>> orderedlists, Func<T, TOrder> orderBy) { var enumerators = orderedlists.ToDictionary(l => l.GetEnumerator(), l => default(T)); IEnumerator<T> tag = null; var firstRun = true; while (true) { var toRemove = new List<IEnumerator<T>>(); var toAdd = new List<KeyValuePair<IEnumerator<T>, T>>(); foreach (var pair in enumerators.Where(pair => firstRun || tag == pair.Key)) { if (pair.Key.MoveNext()) toAdd.Add(pair); else toRemove.Add(pair.Key); } foreach (var enumerator in toRemove) enumerators.Remove(enumerator); foreach (var pair in toAdd) enumerators[pair.Key] = pair.Key.Current; if (enumerators.Count == 0) yield break; var min = enumerators.OrderBy(t => orderBy(t.Value)).FirstOrDefault(); tag = min.Key; yield return min.Value; firstRun = false; } } The method can be used like that: // Person lists are already sorted by age MergeOrderedLists(orderedList, p => p.Age); assuming the following Person class exists somewhere: public class Person { public int Age { get; set; } } Duplicates should be preserved; we don't care about their order in the new sequence. Do you see any obvious optimization I could use?
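
    The question is about C#, but as a sketch of the more textbook approach (a k-way merge driven by a min-heap that holds one "head" element per input, popping the smallest and refilling from the sequence it came from), here is the idea in Python; the heap translates directly to a priority queue in C#:

        import heapq

        def merge_ordered(iterables, key):
            """Lazily merge already-sorted iterables into one sorted stream.

            Each yield pops the smallest pending head element and pulls the next
            element from the same source, so the merge is O(n log k) overall and
            nothing is ever re-sorted. Duplicates are preserved.
            """
            heap = []
            for index, iterable in enumerate(iterables):
                it = iter(iterable)
                for value in it:
                    # (key, index) keeps entries comparable even when keys tie
                    heapq.heappush(heap, (key(value), index, value, it))
                    break
            while heap:
                _, index, value, it = heapq.heappop(heap)
                yield value
                for nxt in it:
                    heapq.heappush(heap, (key(nxt), index, nxt, it))
                    break

        # usage: person lists already sorted by age
        # merged = merge_ordered(ordered_lists, key=lambda p: p.age)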

    Read the article

  • Sockets server design advice

    - by Rob
    We are writing a socket server in C# and need some advice on the design. Background: Clients (from mobile devices) connect to our server app and we leave their socket open so we can send data back down to them whenever we need to. The amount of data varies, but we generally send/receive data from each client every few seconds, so it's quite intensive. The number of simultaneous connections can range from 50-500 (and more in the future). We have already written a server app using async sockets and it works, however we've come across some stumbling blocks and we need to make sure that what we're doing is correct. We have a collection which holds our client states (we have no socket/connection pool at the moment, should we?). Each time a client connects we create a socket and then wait for them to send us some data, and in receiveCallBack we add their clientstate object to our connections dictionary (once we have verified who they are). When a client object then signs off we shut down their socket and then close it, as well as remove them from our dictionary of clients. Presumably everything happens in the right order, and everything works as expected. However, almost every day it stops accepting connections, or so we think; either that or it connects but doesn't actually do anything past that, and we can't work out why it's just stopping. There are a few things that we're unsure about:
    1) Should we be creating some kind of connection pool as opposed to just a dictionary of client sockets?
    2) What happens to the sockets that connect but then don't get added to our dictionary? They just linger around in memory doing nothing; should we create ANOTHER dictionary that holds the sockets as soon as they are created?
    3) What's the best way of finding out if clients are no longer connected? We've read about many methods but we're not sure of the best one to use: send data or read data, and if so, how?
    4) If we loop through the connections dictionary to check for disposed clients, should we be locking the dictionary? If so, how does this affect other client objects trying to use it at the same time; will it throw an error or just wait?
    5) We often get a disposedSocketException within the ReceiveCallBack method at random times. Does this mean we are safe to remove that socket from the collection?
    We can't seem to find any production-type examples which show any of this working. Any advice would be gratefully received.

    Read the article

  • What should I do or not do to avoid the Delphi "push dword" bug?

    - by Maksee
    I found that Delphi 5 generates invalid assembly code in specific cases. I can't understand in what cases in general. The example below produces access violation since a very strange optimization occurs. For a byte in a record or array Delphi generates push dword [...], pop ebx, mov .., bl that works correctly if there are data after this byte (we need at least three to push dword correctly), but fails if the data is inaccessible. I emulated the strict boundaries here with win32 Virtual* functions Specifically the error occurs when the last byte from the block accessed inside FeedBytesToClass procedure. And if I try to change something like using data array instead of object property of remove actionFlag variable, Delphi generates correct assembly instructions. const BlockSize = 4096; type TSomeClass = class private fBytes: PByteArray; public property Bytes: PByteArray read fBytes; constructor Create; destructor Destroy;override; end; constructor TSomeClass.Create; begin inherited Create; GetMem(fBytes, BlockSize); end; destructor TSomeClass.Destroy; begin FreeMem(fBytes); inherited; end; procedure FeedBytesToClass(SrcDataBytes: PByteArray; Count: integer); var j: integer; Ofs: integer; actionFlag: boolean; AClass: TSomeClass; begin AClass:=TSomeClass.Create; try actionFlag:=true; for j:=0 to Count-1 do begin Ofs:=j; if actionFlag then begin AClass.Bytes[Ofs]:=SrcDataBytes[j]; end; end; finally AClass.Free; end; end; procedure TForm31.Button1Click(Sender: TObject); var SrcDataBytes: PByteArray; begin SrcDataBytes:=VirtualAlloc(Nil, BlockSize, MEM_COMMIT, PAGE_READWRITE); try if VirtualLock(SrcDataBytes, BlockSize) then try FeedBytesToClass(SrcDataBytes, BlockSize); finally VirtualUnLock(SrcDataBytes, BlockSize); end; finally VirtualFree(SrcDataBytes, MEM_DECOMMIT, BlockSize); end; end; Initially the error occured when I used access to RGB data of bitmap bits, but the code there is too complex so I narrowed it to this fragment. So the question is what is here so specific that makes Delphi produce push,pop,mov optimization. I need to know this in order to avoid such side effects in general.

    Read the article

  • When is it worth using a BindingSource?

    - by Justin
    I think I understand well enough what the BindingSource class does - i.e. provide a layer of indirection between a data source and a UI control. It implements the IBindingList interface and therefore also provides support for sorting. And I've used it frequently enough, without too many problems. But I'm wondering if I use it more often than I should. Perhaps an example would help. Let's say I have just a simple textbox on a form (using WinForms), and I'd like to bind that textbox to a simple property inside a class that returns a string. Is it worth using a BindingSource in this situation? Now let's say I have a grid on my form, and I'd like to bind it to a DataTable. Should I use a BindingSource now? In the latter case, I probably would not use a BindingSource, as a DataTable, from what I can gather, provides the same functionality that the BindingSource itself would. The DataTable will fire the the right events when a row is added, deleted, etc so that the grid will automatically update. But in the first case with the textbox being bound to a string, I would probably have the class that contains the string property implement INotifyPropertyChanged, so that it could fire the PropertyChanged event when the string changes. I would use a BindingSource so that it could listen to these PropertyChanged events so that it could update the textbox automatically when the string changes. How does this sound so far? I still feel like there's a gap in my understanding that's preventing me from seeing the whole picture. This has been a pretty vague question so far, so I'll try to ask some more specific questions - ideally the answers will reference the above examples or something similar... (1) Is it worth using a BindingSource in either of the above examples? (2) It seems that developers just "assume" that the DataTable class will do the right thing, in firing PropertyChanged events at the right time. How does one know if a data source is capable of doing this? Is there a particular interface that a data source should implement in order for developers to be able to assume this behaviour? (3) Does it matter what Control is being bound to, when considering whether or not to use a BindingSource? Or is it only the data source that should affect the decision? Perhaps the answer is (and this would seem logical enough): the Control needs to be intelligent enough to listen to the PropertyChanged events, otherwise a BindingSource is required. So how does one tell if the Control is capable of doing this? Again, is there a particular interface that developers can look for that the Control must implement? It is this confusion that has, in the past, led to me always using a BindingSource. But I'd like to understand better exactly when to use one, so that I do so only when necessary.

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL (Question updated)

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a lowercase letter and can only contain '_', '-', digits and letters. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation using a scalar UDF and could make very little headway here... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of characters, or in other words how to write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance. Thank you everyone for your comments. This morning, I chose to go the CLR function way. For the purpose of what I was trying to achieve, I created one CLR function which does the validation of an input string and have that called from a SQL UDF, and it works well. Just to measure the performance of a T-SQL UDF using a SQL CLR function vs. a plain T-SQL UDF, I created a SQL CLR function which just checks if the input string contains only lowercase letters, returning true or false, and have that called from a UDF (IsLowerCaseCLR). After that I also created a regular T-SQL UDF (IsLowerCaseTSQL) which does the same thing using 'NOT LIKE'. Then I created a table (Person) with columns Name (varchar) and IsValid (bit) and populated it with names to test. Data: 1000 records with 'Ashish' as the value for the Name column, and 1000 records with 'ashish' as the value for the Name column. Then I ran the following: UPDATE Person Set IsValid=1 WHERE dbo.IsLowerCaseTSQL (Name) The above updated 1000 records (with IsValid=1) and took less than a second. I deleted all the data in the table and repopulated it with the same data. Then I updated the same table using the SQL CLR UDF (with IsValid=1) and this took 3 seconds! If the update happens for 5000 records, the regular UDF takes 0 seconds compared to the CLR UDF which takes 16 seconds! I know very little about T-SQL regular expressions, or I could have tested my actual, more complex validation criteria. But I just wanted to know: even if I could have written that, would it have been faster than the SQL CLR function, considering the example above? Are we using SQL CLR because we can implement a lot richer logic which would have been difficult to write in regular SQL? Sorry for this long post. I just want to know from the experts. Please feel free to ask if you could not understand anything here. Thank you again for your time.
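
    For reference, the rule being validated (first character a lowercase letter, the rest limited to letters, digits, '_' and '-') is a one-line pattern in most regex engines. This is only an illustration of the rule itself, in Python, not the T-SQL or CLR code in question; whether uppercase letters are allowed after the first character is my reading of "letters" above.

        import re

        # first char: lowercase letter; rest: letters, digits, '_' or '-'
        def is_valid(value):
            return re.fullmatch(r"[a-z][A-Za-z0-9_-]*", value) is not None

        for name in ["ashish", "Ashish", "valid-name_1", "1bad"]:
            print(name, is_valid(name))   # True only for "ashish" and "valid-name_1"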

    Read the article

  • SQL Server 2000 timeout from C# .NET

    - by Johnny Egeland
    I have run into a strange problem using SQL Server 2000 and two linked server. For two years now our solution has run without a hitch, but suddenly yesterday a query synchronizing data from one of the databases to the other started timing out. I connect to a server in the production network, which is linked to a server containing orders I need data from. The query contains a few joins, but basically this summarizes what is done: INSERT INTO ProductionDataCache (column1, column2, ...) SELECT tab1.column1, tab1.column2, tab2.column1, tab3.column1 ... FROM linkedserver.database.dbo.Table1 AS tab1 JOIN linkedserver.database.dbo.Table2 AS tab2 ON (...) JOIN linkedserver.database.dbo.Tabl32 AS tab3 ON (...) ... WHERE tab1.productionOrderId = @id ORDER BY ... Obviously my first attempt to fix the problem was to increase the timeout limit from the original 5 minutes. But when I arrived at 30 minutes and still got a timeout, I started to suspect something else was going on. A query just does not go from executing in less than 5 minutes to over 30 minutes over night. I outputted the SQL query (which was originally in the C# code) to my logs, and decided to execute the query in the Query Analyzer directly on the database server. To my big surprise, the query executed correctly in less than 10 seconds. So I isolated the SQL execution in a simple test program, and observed the same query time out both on the server originally running this solution AND when running it locally on the database server. Also I have tried to create a Stored Procedure and execute this from the program, but this also times out. Running it in Query Analyzer works fine in less than a few seconds. It seems that the problem only occurs when I execute this query from the C# program. Has anyone seen such behavior before, and found a solution for it? UPDATE: I have now used SQL Profiler on the server. The obvious difference is that when executing the query from the .NET program, it shows up in the log as "exec sp_executesql N'INSERT INTO ...'", but when executing from Query Analyzer it occurs as a normal query in the log. Further I tried to connect the SQL Query Analyzer using the same SQL user as the program, and this triggered the problem in Query Analyzer as well. So it seems the problem only occurs when connecting via TCP/IP using a sql user.

    Read the article

  • Objective-c - How to serialize audio file into small packets that can be played?

    - by vfn
    Hi there. So, I would like to take a sound file, convert it into packets, and send them to another computer. I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 so that peer2 can play it. The scenario is: peer1 has an audio file, splits the audio file into many small packets, puts them into NSData objects and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this? Or even if it is possible? EDIT: Here is some code to illustrate what I would like to achieve. // This code is part of peer1, the one who sends the data - (void)sendData { int packetId = 0; NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"]; NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath]; NSMutableArray *arraySoundData = [[NSMutableArray alloc] init]; // Splitting the audio in 2 pieces // This is only an illustration // The idea is to split the data into multiple pieces // depending on the size of the file to be sent NSRange soundRange; soundRange.length = [soundData length]/2; soundRange.location = 0; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; soundRange.length = [soundData length]/2; soundRange.location = [soundData length]/2; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; for (int i = 0; i < [arraySoundData count]; i++) { ... } } // This is the code on peer2 that would receive and play the piece of audio in each packet - (void) receiveData:(NSData *)data { NSKeyedUnarchiver* unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data]; if ([unarchiver containsValueForKey:PACKET_ID]) NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]); if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) { NSLog(@"DECODED sound"); NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA]; if (sound == nil) { NSLog(@"sound is nil!"); } else { NSLog(@"sound is not nil!"); AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc]; if ([audioPlayer initWithData:sound error:nil]) { [audioPlayer prepareToPlay]; [audioPlayer play]; } else { [audioPlayer release]; NSLog(@"Player couldn't load data"); } } } [unarchiver release]; } So, here is what I am trying to achieve... what I really need to know is how to create the packets so peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order in which the packets are received or played... I only need to get the sound sliced and then be able to play each piece, each slice, without needing to wait for the whole file to be received by peer2. Thanks!
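
    As a language-agnostic sketch of the "split the file into many small packets" step described above (the packet size and the id tagging are arbitrary choices for illustration):

        def split_into_packets(data, packet_size=4096):
            """Split a byte buffer into fixed-size packets, tagging each with an id."""
            return [(packet_id, data[offset:offset + packet_size])
                    for packet_id, offset in enumerate(range(0, len(data), packet_size))]

        # usage: packets = split_into_packets(open("myAudioFile.wav", "rb").read())
        # each (id, chunk) pair would then be serialized and sent to the other peer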

    Read the article

  • Managing a difficult manager

    - by griegs
    I have a situation here at work. We are redeveloping our basic architecture across the entire company. Currently we have the following hierarchy; SQL Database <= Stored Procs not allowed. nHibernate Classes to convert nHibernate into our own objects Web Service <= for all external and [internal] calls. Class to take objects from Web Service and back into our own objects and then… Normal nTier application architecture such as Data Transformation Layer, Business layer etc. Within the database, when we are writing a hierarchy of objects to the database, say for example; Order Person Details Address Product Other We need to serialise the object and save it, in its entirety, to an image field in a table. No attempt has been made to store the objects in their own tables so that we can do useful stuff like report on it. This is an architecture that was implemented [way] before I started and as you can probably appreciate, is a complete nightmare not to mention slow as a wet weekend. We’re not even allowed to have stored procs within SQL server because in my boss’s last job they had a hundred or so and he had a problem identifying them all so therefore all stored procs are the devil. Now the same person that developed the above architecture has developed the new one. It came as no surprise that he’s essentially used the same framework only now it’s using DotNet 3.5 with interfaces and generics. We still have to go through web services, still need to serialise (everything), still not allowed to use stored procs etc. In fact, we’re only barely able to bang two rocks together here. He says to us that the framework is open for discussion but when you discuss it, unless you approve of his design, you are told flatly “No”. He simply won’t listen to any other suggestions. Even when you show him demo applications of his proposed architecture v’s yours and he can see the speed difference, he still won’t take that on board. So I guess my question is, and I know others have experienced the same things out there, how do I get through to someone like this? How do you convince someone to ditch Web Services for internal calls and applications? How do you demonstrate, and make it stick, that stored procs are a better way to go than ad-hoc sql statements? This is killing me. I don’t want to repeat the mistakes of the past and I certainly don’t want to write code that I know is going to be slow and cumbersome. Help!

    Read the article

  • ArrayIndexOutOfBoundsException with custom Android Adapter for multiple views in ListView

    - by Dan Watling
    I am attempting to create a custom Adapter for my ListView since each item in the list can have a different view (a link, toggle, or radio group), but when I try to run the Activity that uses the ListView I receive an error and the app stops. The application is targeted for the Android 1.6 platform. The code: public class MenuListAdapter extends BaseAdapter { private static final String LOG_KEY = MenuListAdapter.class.getSimpleName(); protected List<MenuItem> list; protected Context ctx; protected LayoutInflater inflater; public MenuListAdapter(Context context, List<MenuItem> objects) { this.list = objects; this.ctx = context; this.inflater = (LayoutInflater)this.ctx.getSystemService(Context.LAYOUT_INFLATER_SERVICE); } @Override public View getView(int position, View convertView, ViewGroup parent) { Log.i(LOG_KEY, "Position: " + position + "; convertView = " + convertView + "; parent=" + parent); MenuItem item = list.get(position); Log.i(LOG_KEY, "Item=" + item ); if (convertView == null) { convertView = this.inflater.inflate(item.getLayout(), null); } return convertView; } @Override public boolean areAllItemsEnabled() { return false; } @Override public boolean isEnabled(int position) { return true; } @Override public int getCount() { return this.list.size(); } @Override public MenuItem getItem(int position) { return this.list.get(position); } @Override public long getItemId(int position) { return position; } @Override public int getItemViewType(int position) { Log.i(LOG_KEY, "getItemViewType: " + this.list.get(position).getLayout()); return this.list.get(position).getLayout(); } @Override public int getViewTypeCount() { Log.i(LOG_KEY, "getViewTypeCount: " + this.list.size()); return this.list.size(); } } The error I receive: java.lang.ArrayIndexOutOfBoundsException at android.widget.AbsListView$RecycleBin.addScrapView(AbsListView.java:3523) at android.widget.ListView.measureHeightOfChildren(ListView.java:1158) at android.widget.ListView.onMeasure(ListView.java:1060) at android.view.View.measure(View.java:7703) I do know that the application is returning from getView and everything seems in order. Any ideas on what could be causing this would be appreciated. Thanks, -Dan

    Read the article

  • algorithm optimization: returning object and sub-objects from single SQL statement in PHP

    - by pocketfullofcheese
    I have an object oriented PHP application. I have a simple hierarchy stored in an SQL table (Chapters and Authors that can be assigned to a chapter). I wrote the following method to fetch the chapters and the authors in a single query and then loop through the result, figuring out which rows belong to the same chapter and creating both Chapter objects and arrays of Author objects. However, I feel like this can be made a lot neater. Can someone help? function getChaptersWithAuthors($monographId, $rangeInfo = null) { $result =& $this->retrieveRange( 'SELECT mc.chapter_id, mc.monograph_id, mc.chapter_seq, ma.author_id, ma.monograph_id, mca.primary_contact, mca.seq, ma.first_name, ma.middle_name, ma.last_name, ma.affiliation, ma.country, ma.email, ma.url FROM monograph_chapters mc LEFT JOIN monograph_chapter_authors mca ON mc.chapter_id = mca.chapter_id LEFT JOIN monograph_authors ma ON ma.author_id = mca.author_id WHERE mc.monograph_id = ? ORDER BY mc.chapter_seq, mca.seq', $monographId, $rangeInfo ); $chapterAuthorDao =& DAORegistry::getDAO('ChapterAuthorDAO'); $chapters = array(); $authors = array(); while (!$result->EOF) { $row = $result->GetRowAssoc(false); // initialize $currentChapterId for the first row if ( !isset($currentChapterId) ) $currentChapterId = $row['chapter_id']; if ( $row['chapter_id'] != $currentChapterId) { // we're on a new row. create a chapter from the previous one $chapter =& $this->_returnFromRow($prevRow); // set the authors with all the authors found so far $chapter->setAuthors($authors); // clear the authors array unset($authors); $authors = array(); // add the chapters to the returner $chapters[$currentChapterId] =& $chapter; // set the current id for this row $currentChapterId = $row['chapter_id']; } // add every author to the authors array if ( $row['author_id'] ) $authors[$row['author_id']] =& $chapterAuthorDao->_returnFromRow($row); // keep a copy of the previous row for creating the chapter once we're on a new chapter row $prevRow = $row; $result->MoveNext(); if ( $result->EOF ) { // The result set is at the end $chapter =& $this->_returnFromRow($row); // set the authors with all the authors found so far $chapter->setAuthors($authors); unset($authors); // add the chapters to the returner $chapters[$currentChapterId] =& $chapter; } } $result->Close(); unset($result); return $chapters; } PS: the _returnFromRow methods simply construct an Chapter or Author object given the SQL row. If needed, I can post those methods here.
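
    Only as a sketch (in Python rather than PHP, with row keys mirroring the columns selected above), the usual way to tidy this kind of loop is to group the ordered rows by chapter_id as you go, which removes the need for the prevRow/firstRun bookkeeping and the special end-of-result-set case:

        def group_chapters(rows):
            """rows: dicts ordered by chapter_seq then author seq, as in the SQL above."""
            chapters = {}   # chapter_id -> {"chapter": {...}, "authors": [...]}
            for row in rows:
                chapter_id = row["chapter_id"]
                if chapter_id not in chapters:
                    chapters[chapter_id] = {
                        "chapter": {k: row[k] for k in
                                    ("chapter_id", "monograph_id", "chapter_seq")},
                        "authors": [],
                    }
                # LEFT JOIN can yield chapters with no authors (author_id is NULL)
                if row["author_id"] is not None:
                    chapters[chapter_id]["authors"].append(
                        {k: row[k] for k in ("author_id", "first_name", "last_name")})
            return chapters

    Because the rows arrive already ordered by chapter, each chapter is created on its first row and simply accumulates authors afterwards.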

    Read the article

  • Wiki-fying a text using LPeg

    - by Stigma
    Long story coming up, but I'll try to keep it brief. I have many pure-text paragraphs which I extract from a system and re-output in wiki format so that the copying of said data is not such an arduous task. This all goes really well, except that there are no automatic references being generated for the 'topics' we have pages for, which end up needing to be added by reading through all the text and adding it in manually by changing Topic to [[Topic]]. First requirement: each topic is only to be made clickable once, which is the first occurrence. Otherwise, it would become a really spammy linkfest, which would detract from readability. To avoid issues with topics that start with the same words Second requirement: overlapping topic names should be handled in such a way that the most 'precise' topic gets the link, and in later occurrences, the less precise topics do not get linked, since they're likely not correct. Example: topics = { "Project", "Mary", "Mr. Moore", "Project Omega"} input = "Mary and Mr. Moore work together on Project Omega. Mr. Moore hates both Mary and Project Omega, but Mary simply loves the Project." output = function_to_be_written(input) -- "[[Mary]] and [[Mr. Moore]] work together on [[Project Omega]]. Mr. Moore hates both Mary and Project Omega, but Mary simply loves the [[Project]]." Now, I quickly figured out a simple or complicated string.gsub() could not get me what I need to satisfy the second requirement, as it provides no way to say 'Consider this match as if it did not happen - I want you to backtrack further'. I need the engine to do something akin to: input = "abc def ghi" -- Looping over the input would, in this order, match the following strings: -- 1) abc def ghi -- 2) abc def -- 3) abc -- 4) def ghi -- 5) def -- 6) ghi Once a string matches an actual topic and has not been replaced before by its wikified version, it is replaced. If this topic has been replaced by a wikified version before, don't replace, but simply continue the matching at the end of the topic. (So for a topic "abc def", it would test "ghi" next in both cases.) Thus I arrive at LPeg. I have read up on it, played with it, but it is considerably complex, and while I think I need to use lpeg.Cmt and lpeg.Cs somehow, I am unable to mix the two properly to make what I want to do work. I am refraining from posting my practice attempts as they are of miserable quality and probably more likely to confuse anyone than assist in clarifying my problem. (Why do I want to use a PEG instead of writing a triple-nested loop myself? Because I don't want to, and it is a great excuse to learn PEGs.. except that I am in over my head a bit. Unless it is not possible with LPeg, the first is not an option.)
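
    Not the LPeg grammar being asked for, but to pin down the two requirements (the longest topic wins at each position, and each topic is linked only on its first occurrence), here is a plain-Python sketch using the example data above; it reproduces the expected output exactly:

        import re

        topics = ["Project", "Mary", "Mr. Moore", "Project Omega"]
        text = ("Mary and Mr. Moore work together on Project Omega. Mr. Moore hates "
                "both Mary and Project Omega, but Mary simply loves the Project.")

        # longest names first so "Project Omega" is tried before "Project"
        pattern = re.compile("|".join(re.escape(t)
                                      for t in sorted(topics, key=len, reverse=True)))
        linked = set()

        def link_once(match):
            topic = match.group(0)
            if topic in linked:
                return topic          # later mentions stay plain text
            linked.add(topic)
            return "[[%s]]" % topic

        print(pattern.sub(link_once, text))

    Because the second "Project Omega" is consumed as a whole (already-linked) match, the shorter "Project" inside it is never linked, which is the overlap behaviour the second requirement asks for.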

    Read the article

  • Lucene.NET search index approach

    - by Tim Peel
    Hi, I am trying to put together a test case for using Lucene.NET on one of our websites. I'd like to do the following: Index in a single unique id. Index across a comma delimitered string of terms or tags. For example. Item 1: Id = 1 Tags = Something,Separated-Term I will then be structuring the search so I can look for documents against tag i.e. tags:something OR tags:separate-term I need to maintain the exact term value in order to search against it. I have something running, and the search query is being parsed as expected, but I am not seeing any results. Here's some code. My parser (_luceneAnalyzer is passed into my indexing service): var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_CURRENT, "Tags", _luceneAnalyzer); parser.SetDefaultOperator(QueryParser.Operator.AND); return parser; My Lucene.NET document creation: var doc = new Document(); var id = new Field( "Id", NumericUtils.IntToPrefixCoded(indexObject.id), Field.Store.YES, Field.Index.NOT_ANALYZED, Field.TermVector.NO); var tags = new Field( "Tags", string.Join(",", indexObject.Tags.ToArray()), Field.Store.NO, Field.Index.ANALYZED, Field.TermVector.YES); doc.Add(id); doc.Add(tags); return doc; My search: var parser = BuildQueryParser(); var query = parser.Parse(searchQuery); var searcher = Searcher; TopDocs hits = searcher.Search(query, null, max); IList<SearchResult> result = new List<SearchResult>(); float scoreNorm = 1.0f / hits.GetMaxScore(); for (int i = 0; i < hits.scoreDocs.Length; i++) { float score = hits.scoreDocs[i].score * scoreNorm; result.Add(CreateSearchResult(searcher.Doc(hits.scoreDocs[i].doc), score)); } return result; I have two documents in my index, one with the tag "Something" and one with the tags "Something" and "Separated-Term". It's important for the - to remain in the terms as I want an exact match on the full value. When I search with "tags:Something" I do not get any results. Question What Analyzer should I be using to achieve the search index I am after? Are there any pointers for putting together a search such as this? Why is my current search not returning any results? Many thanks

    Read the article

  • Create a screen like the iPhone home screen with Scrollview and Buttons

    - by Anthony Chan
    Hi, I'm working on a project and need to create a screen similar to the iPhone home screen: A scrollview with multiple pages A bunch of icons When not in edit mode, swipe through different pages (even I started the touch on an icon) When not in edit mode, tap an icon to do something When in edit mode, drag the icon to swap places, and even swap to different pages When in edit mode, tap an icon to remove it Previously I read from several forums that I have to subclass UIScrollview in order to have touch input for the UIViews on top of it. So I subclassed it overriding the methods to handle touches: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { //If not dragging, send event to next responder if (!self.dragging) [self.nextResponder touchesBegan:touches withEvent:event]; else [super touchesBegan:touches withEvent:event]; } In general I've override the touchesBegan:, touchesMoved: and touchesEnded: methods similarly. Then in the view controller, I added to following code: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; UIView *hitView = (UIView *)touch.view; if ([hitView isKindOfClass:[UIView class]]) { [hitView doSomething]; NSLog(@"touchesBegan"); } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { // Some codes to move the icons NSLog(@"touchesMoved"); } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchesEnded"); } When I run the app, I have the touchesBegan method detected correctly. However, when I tried to drag the icon, the icon just moved a tiny bit and then the page started to scroll. In console, it logged with 2 or 3 "touchesMoved" message only. However, I learned from another project that it should logged tonnes of "touchesMoved" message as long as I'm still dragging on the screen. (I'm suspecting I have the delaysContentTouches set to YES, so it delays a little bit when I tried to drag the icons. After that minor delay, it sends to signal back to the scrollview to scroll through the page. Please correct me if I'm wrong.) So if any help on the code to perform the above tasks would be greatly appreciated. I've stuck in this place for nearly a week with no hope. Thanks a lot.

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:
    Name       Age  Position  Date Joined
    Jon Smith  23   Striker   18th Mar 2005
    John Doe   38   Defender  3rd Jan 1988
    In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is because the data is date-ordered and I will need to re-sort if the user edits a date field. My main question is what architecture / tools would be best suited to fulfil my requirements at a high level? From the research I have done so far my initial conclusions were: ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet. Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model. jQuery to allow the user to edit data in the table, error-check edited data entries, etc. Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this? Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thoughts are that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using a sort glyph in column headers, etc.) and I don't really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view? Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction and this seemed like it may be a better option than using Partial Views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction or have I misunderstood its usage? Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks

    Read the article

  • Creating an XML sitemap with PHP

    - by iMaster
    I'm trying to create a sitemap that will automatically update. I've done something similiar with my RSS feed, but this sitemap refuses to work. You can view it live at http://designdeluge.com/sitemap.xml I think the main problem is that its not recognizing the PHP code. Here's the full source: <?php header('Content-type: text/xml'); ?> <?xml version="1.0" encoding="UTF-8" ?> <urlset xmlns="http://www.google.com/schemas/sitemap/0.84" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84 http://www.google.com/schemas/sitemap/0.84/sitemap.xsd"> <url> <loc>http://www.designdeluge.com/</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>1.00</priority> </url> <url> <loc>http://www.designdeluge.com/archives</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>0.5</priority> </url> <url> <loc>http://www.designdeluge.com/about.php</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>0.5</priority> </url> <?php include 'connection.php'; $entries = mysql_query("SELECT * FROM Entries ORDER BY timestamp DESC"); while($row = mysql_fetch_array($entries)) { $title = stripslashes($row['title']); $date = date("Y-m-d", strtotime($row['timestamp'])); ?> <url> <loc>http://www.designdeluge.com/<?php echo $row['title']; ?></loc> <lastmod><?php echo $date; ?></lastmod> <changefreq>none</changefreq> <priority>0.5</priority> </url> <?php } ?> </urlset> The problem is that the dynamic URL's (e.g. the ones pulled from the DB) aren't being generated and the sitemap won't validate. Thanks!

    Read the article

  • Difficulties getting GraphViz working as a library in C++

    - by DistortedLojik
    Am working on a program that will allow a graph of nodes to be displayed and then updated visually as the nodes themselves are updated. I am fairly new to Visual Studio 2010 and am following the GraphViz guide located at http://www.graphviz.org/pdf/libguide.pdf in order to get GraphViz working as a library. I have the following code which is taken straight from the pdf linked above. #include <graphviz\gvc.h> #include <graphviz\cdt.h> #include <graphviz\graph.h> #include <graphviz\pathplan.h> using namespace std; int main(int argc, char **argv) { Agraph_t *g; Agnode_t *n, *m; Agedge_t *e; Agsym_t *a; GVC_t *gvc; /* set up a graphviz context */ gvc = gvContext(); /* parse command line args - minimally argv[0] sets layout engine */ gvParseArgs(gvc, argc, argv); /* Create a simple digraph */ g = agopen("g", AGDIGRAPH); n = agnode(g, "n"); m = agnode(g, "m"); e = agedge(g, n, m); /* Set an attribute - in this case one that affects the visible rendering */ agsafeset(n, "color", "red", ""); /* Compute a layout using layout engine from command line args */ gvLayoutJobs(gvc, g); /* Write the graph according to -T and -o options */ gvRenderJobs(gvc, g); /* Free layout data */ gvFreeLayout(gvc, g); /* Free graph structures */ agclose(g); /* close output file, free context, and return number of errors */ return (gvFreeContext(gvc)); } After compiling I get the following errors which indicate that I do not have it correctly linked. 1>main.obj : error LNK2019: unresolved external symbol _gvFreeContext referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _agclose referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _gvFreeLayout referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _gvRenderJobs referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _gvLayoutJobs referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _agsafeset referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _agedge referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _agnode referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _agopen referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _gvParseArgs referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol _gvContext referenced in function _main Within the VC++ Directories I have C:\Program Files (x86)\Graphviz2.26.3\include in the Include Directories and C:\Program Files (x86)\Graphviz2.26.3\lib\release\lib in the Library Directories Any help would be greatly appreciated to help get this working. Thank you.

    Read the article

  • Scared of Calculus - Required to pass Differential Calculus as part of my Computer science major

    - by ke3pup
    Hi guys, I'm finishing my Computer Science degree at university, but my fear of maths (lack of background knowledge) made me leave all my maths units till the very end, which is now. I either take them on and pass or have to give up. I've passed all my programming units easily, but knowing my poor maths skills won't do, I've been steering clear of the maths units. I have to pass Differential Calculus and Linear Algebra first. With the help of a book named "Linear Algebra: A Modern Introduction" I'm finding myself on track and I think I can pass the Linear Algebra unit. But with differential calculus I can't find a book to help me. They're either too advanced or just too simple for what I have to learn. The things I'm required to know for this unit are: Set notation, the real number line, complex numbers in Cartesian form. Complex plane, modulus. Complex numbers in polar form. De Moivre's Theorem. Complex powers and nth roots. Definition of e^(iθ) and e^z for z complex. Applications to trigonometry. Revision of domain and range of a function. Working in R3. Curves and surfaces. Functions of 2 variables. Level curves. Partial derivatives and tangent planes. The derivative as a difference quotient. Geometric significance of the derivative. Discussion of limit. Higher order partial derivatives. Limits of f(x,y). Continuity. Maxima and minima of f(x,y). The chain rule. Implicit differentiation. Directional derivatives and the gradient. Limit laws, l'Hôpital's rule, composition law. Definition of sinh and cosh and their inverses. Taylor polynomials. The remainder term. Taylor series. Is there a book to help me get on track with the above? Being a student I can't buy too many books, hence why I'm looking for a book that covers the topics I need to know. The university library has a fairly limited collection, which I borrowed from but didn't find useful as it was too complex.
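
    For reference, since the e^(iθ) notation above is easy to garble in plain text, the two identities those topics refer to are, in LaTeX:

        e^{i\theta} = \cos\theta + i\sin\theta
        \qquad (\cos\theta + i\sin\theta)^n = \cos(n\theta) + i\sin(n\theta)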

    Read the article

  • Flex+JPA/Hibernate+BlazeDS+MySQL how to debug this monster?!

    - by Zenzen
    Ok so I'm making a "simple" web app using the technologies from the topic, recently I found http://www.adobe.com/devnet/flex/articles/flex_hibernate.html so I'm following it and I try to apply it to my app, the only difference being I'm working on a Mac and I'm using MAMP for the database (so no command line for me). The thing is I'm having some trouble with retrieving/connecting to the database. I have the remoting-config.xml, persistance.xml, a News.java class (my Entity), a NewsService.java class, a News.as class - all just like in the tutorial. I have of course this line in one of my .mxmls: <mx:RemoteObject id="loaderService" destination="newsService" result="handleLoadResult(event)" fault="handleFault(event)" showBusyCursor="true" /> And my remoting-config.xml looks like this (well part of it): <destination id="newsService"> <properties><source>com.gamelist.news.NewsService</source></properties> </destination> NewsService has a method: public List<News> getLatestNews() { EntityManagerFactory emf = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT); EntityManager em = emf.createEntityManager(); Query findLatestQuery = em.createNamedQuery("news.findLatest"); List<News> news = findLatestQuery.getResultList(); return news; } And the named query is in the News class: @Entity @Table(name="GLT_NEWS") @NamedQueries({ @NamedQuery(name="news.findLatest", query="from GLT_NEWS order by new_date_added limit 5 ") }) The handledLoadResult looks like this: private function handleLoadResult(ev:ResultEvent):void { newsList = ev.result as ArrayCollection; newsRecords = newsList.length; } Where: [Bindable] private var newsList:ArrayCollection = new ArrayCollection(); But when I try to trigger: loaderService.getLatestNews(); nothing happens, newsList is empty. Few things I need to point out: 1) as I said I didn't install mysql manually, but I'm using MAMP (yes, the server's running), could this cause some trouble? 2) I already have a "gladm" database and I have a "GLT_NEWS" table with all the fields, is this bad? Basically the question is how am I suppose to debug this thing so I can find the mistake I'm making? I know that loadData() is executed (did a trace()), but I have no idea what happens with loaderService.getLatestNews()... @EDIT: ok so I see I'm getting an error in the "fault handler" which says "Error: Client.Error.MessageSend - Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 404: url: 'http://localhost:8080/WebContent/messagebroker/amf' - "

    Read the article
