Search Results

Search found 587 results on 24 pages for 'seek'.


  • Rewind request body stream

    - by Despertar
    I am re-implementing a request logger as OWIN middleware which logs the request URL and body of all incoming requests. I am able to read the body, but if I do, the body parameter in my controller is null. I'm guessing it's null because the stream position is at the end, so there is nothing left to read when it tries to deserialize the body. I had a similar issue in a previous version of Web API but was able to set the Stream position back to 0. This particular stream throws a "This stream does not support seek operations" exception. In the most recent version of Web API 2.0 I could call Request.HttpContent.ReadAsStringAsync() inside my request logger, and the body would still arrive at the controller intact. How can I rewind the stream after reading it? Or how can I read the request body without consuming it?

        public class RequestLoggerMiddleware : OwinMiddleware
        {
            public RequestLoggerMiddleware(OwinMiddleware next) : base(next) { }

            public override Task Invoke(IOwinContext context)
            {
                return Task.Run(() =>
                {
                    string body = new StreamReader(context.Request.Body).ReadToEnd();
                    // log body
                    context.Request.Body.Position = 0; // cannot set stream position back to 0
                    Console.WriteLine(context.Request.Body.CanSeek); // prints false
                    this.Next.Invoke(context);
                });
            }
        }

        public class SampleController : ApiController
        {
            public void Post(ModelClass body)
            {
                // body is now null if the middleware reads it
            }
        }
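
    A common workaround, offered here as a hedged sketch rather than the poster's solution: copy the non-seekable body into a MemoryStream up front and assign the copy back to context.Request.Body (IOwinRequest exposes Body with a setter). The copy supports Seek, so both the logger and the model binder can read it:

        public override async Task Invoke(IOwinContext context)
        {
            // Buffer the non-seekable request body into a seekable copy.
            var buffer = new MemoryStream();
            await context.Request.Body.CopyToAsync(buffer);
            buffer.Position = 0;

            string body = new StreamReader(buffer).ReadToEnd(); // log the body here
            buffer.Position = 0;             // rewind the copy, which does support Seek
            context.Request.Body = buffer;   // hand the seekable copy downstream

            await Next.Invoke(context);
        }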


  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments:

        using (MemoryStream ms = new MemoryStream())
        {
            TextRange tr = new TextRange(source.ContentStart, source.ContentEnd);
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            tr = new TextRange(target.CaretPosition, target.CaretPosition);
            tr.Load(ms, DataFormats.Xaml);
        }

    This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer to this is going to end up being checking the source to see if it consists entirely of a single block, and if it does, setting the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.
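
    A hedged sketch of exactly that idea - when the source holds a single Paragraph, save only its inline content so the Load at the caret does not introduce a paragraph break (untested; block types other than Paragraph would still need the fallback branch):

        Paragraph para = source.Blocks.FirstBlock as Paragraph;
        TextRange tr = (source.Blocks.Count == 1 && para != null)
            ? new TextRange(para.ContentStart, para.ContentEnd)      // inlines only
            : new TextRange(source.ContentStart, source.ContentEnd); // whole document

        using (MemoryStream ms = new MemoryStream())
        {
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            new TextRange(target.CaretPosition, target.CaretPosition).Load(ms, DataFormats.Xaml);
        }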


  • Slow query with unexpected index scan

    - by zerkms
    Hello. I have this query:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, containing 11.1M records; the other two tables are about 1M each. This query works slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which doesn't actually take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by an index), then everything becomes fast in the first case too. This points to HDD IO issues, but doesn't explain the change of plan. So, any ideas? UPDATE: sampled_date is in table sample and covered by an index; the other fields from this query, test.sample_number and result.test_number, are covered by indexes too. UPDATE 2: apparently SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and then did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is, or why the query optimizer chooses such an inappropriate way to select data in the first case.
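
    One experiment worth noting for readers with the same symptom (a hypothetical sketch - the index name below is a placeholder for whatever index the fast plan's seek used): forcing the nonclustered index reproduces the fast plan, which helps confirm that the optimizer's row-count estimate, rather than the data itself, is what flips the plan:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result WITH (INDEX (IX_result_test_number))  -- placeholder name
            ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'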


  • iPad MPMoviePlayerController video loads but automatically pauses when played

    - by slayerIQ
    Hello. I am trying to get the MPMoviePlayerController to work. I load a video and everything goes well - I even see the first frame - but then it automatically pauses, and if I press play it pauses again. In the simulator it works perfectly, but on the iPad device it gives the problem. I can even seek through the video and I see the frame I seeked to, but nothing plays. This is some output from the console:

        2010-06-08 22:16:13.145 app[3089:207] Using two-stage rotation animation. To use the smoother single-stage animation, this application must remove two-stage method implementations.
        [Switching to thread 12803]
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.2 (7B367)/Symbols/System/Library/VideoDecoders/VCH263.videodecoder" (file not found).
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.2 (7B367)/Symbols/System/Library/VideoDecoders/H264H2.videodecoder" (file not found).
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.2 (7B367)/Symbols/System/Library/VideoDecoders/MP4VH2.videodecoder" (file not found).
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.2 (7B367)/Symbols/System/Library/VideoDecoders/JPEGH1.videodecoder" (file not found).
        2010-06-08 22:16:15.145 app[3089:207] setting file:///private/var/mobile/Applications/46CE5456-6338-4BBF-A560-DCEFF700ACE0/tmp/MediaCache/

    I don't get those warnings when using the simulator, BTW. Does anyone know how to fix this?


  • JMock mock DAO object

    - by Gandalf StormCrow
    Hi all. I wrote a method that retrieves a certain list of strings, given a correct string key. When I create the list (the one to be retrieved by the method described in the previous sentence) by hand and test against it, I can easily get results and the test passes successfully. On the other hand, I save the contents of this list to the database in 2 columns, key and value, and I wrote a class with a method inside it that retrieves these items; when I print them out to the console the expected results are correct. The DAO is initialized from the application context, where inside its bean it gets a session, and because of that the DAO works. Now I'm trying to write a test which will mock the DAO, because I'm running the test locally, not on the server. So I told JMock to mock it: private MyDAO myDAO; and in the setup(), myDAO = context.mock(MyDAO.class); I think I'm mocking it correctly - or not? How can I mock this data from the database? What is the best way? Is there good JMock documentation somewhere? On their official site it's not very good and clear - you have to know what you seek in order to find it, and can't discover something cool in the meantime.
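
    A minimal sketch of stubbing the lookup with JMock 2 (the method name getValues and its return type are assumptions - substitute whatever MyDAO actually declares). The mock then stands in for the database entirely:

        import java.util.Arrays;
        import org.jmock.Expectations;
        import org.jmock.Mockery;

        public class MyServiceTest {
            private final Mockery context = new Mockery();

            public void testUsesDaoData() {
                final MyDAO myDAO = context.mock(MyDAO.class);

                context.checking(new Expectations() {{
                    // hand back canned rows instead of touching the database
                    oneOf(myDAO).getValues("someKey");
                    will(returnValue(Arrays.asList("first", "second")));
                }});

                // ... exercise the class under test with myDAO injected ...
                context.assertIsSatisfied();
            }
        }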


  • C file read leaves garbage characters

    - by KJ
    Hi. I'm trying to read the contents of a file into my program but I keep occasionally getting garbage characters at the end of the buffers. I haven't been using C a lot (rather I've been using C++) but I assume it has something to do with streams. I don't really know what to do though. I'm using MinGW. Here is the code (this gives me garbage at the end of the second read):

        #include <stdio.h>
        #include <stdlib.h>

        char* filetobuf(char *file)
        {
            FILE *fptr;
            long length;
            char *buf;

            fptr = fopen(file, "r"); /* Open file for reading */
            if (!fptr) /* Return NULL on failure */
                return NULL;
            fseek(fptr, 0, SEEK_END); /* Seek to the end of the file */
            length = ftell(fptr); /* Find out how many bytes into the file we are */
            buf = (char*)malloc(length+1); /* Allocate a buffer for the entire length of the file and a null terminator */
            fseek(fptr, 0, SEEK_SET); /* Go back to the beginning of the file */
            fread(buf, length, 1, fptr); /* Read the contents of the file in to the buffer */
            fclose(fptr); /* Close the file */
            buf[length] = 0; /* Null terminator */
            return buf; /* Return the buffer */
        }

        int main()
        {
            char* vs;
            char* fs;
            vs = filetobuf("testshader.vs");
            fs = filetobuf("testshader.fs");
            printf("%s\n\n\n%s", vs, fs);
            free(vs);
            free(fs);
            return 0;
        }

    The filetobuf function is from this example http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_%28C_/_SDL%29. It seems right to me though. So anyway, what's up with that?
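
    A likely culprit, offered as a hedged guess: on Windows (MinGW), "r" opens the file in text mode, so CRLF line endings are collapsed during fread() and it delivers fewer than length bytes - everything between the last byte actually read and buf[length] is uninitialized garbage. Opening in binary mode, and terminating at the count fread() actually returns, sidesteps both problems:

        fptr = fopen(file, "rb");                 /* binary mode: no CRLF translation */
        ...
        size_t got = fread(buf, 1, length, fptr); /* fread reports how much it really read */
        buf[got] = '\0';                          /* terminate at the real end of the data */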


  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and play back audio. What I would like to be able to do is to record some audio, allow the user to pause the record queue, and to seek back and forward through the audio to select a position from where they can start recording again. I have got over the seeking issue by making the playback AudioQueueBuffer sizes small enough that the play audio queue callback happens at a rate which allows the user to use a slider control to hear the audio as they adjust the slider back and forth. I think I can achieve recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the audio recording queue callback. The trouble is this only inserts audio over the previously recorded audio. The file length obviously doesn't change, so if the user were to go backwards and record less audio than before, the old audio would still remain after the end of the newly recorded audio. Is there a way I can get the AudioFile to truncate at the point the user starts to insert the new audio? Is there some other way I can remove the old audio starting at the insert position, or is there a better way of going about this task? Thanks


  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

        sex (true/false)
        demographic classification (A, B, C etc)
        place of birth
        date of birth
        weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, be able to view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, which have hierarchical attributes. However, I am struggling with how to model the sex and demographic classification fields, because: they have no obvious hierarchical attributes which would aid aggregation (AFAIA), which suggests they should be in a fact table; yet they are mostly static or change very rarely, which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
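
    For the sake of concreteness, here is one plausible star-schema sketch (every name below is invented for illustration). The usual Kimball-style reading treats both troublesome fields as ordinary, if tiny, dimensions - a dimension needs only descriptive grouping value, not a hierarchy:

        CREATE TABLE dim_sex         (sex_key  INT PRIMARY KEY, sex_label VARCHAR(10));  -- 'male' / 'female'
        CREATE TABLE dim_demographic (demo_key INT PRIMARY KEY, classification CHAR(1)); -- 'A', 'B', 'C' ...
        CREATE TABLE dim_date        (date_key INT PRIMARY KEY, full_date DATE, year INT, quarter INT);
        CREATE TABLE dim_location    (loc_key  INT PRIMARY KEY, country VARCHAR(50), region VARCHAR(50));

        CREATE TABLE fact_weight (
            date_key INT REFERENCES dim_date(date_key),
            sex_key  INT REFERENCES dim_sex(sex_key),
            demo_key INT REFERENCES dim_demographic(demo_key),
            loc_key  INT REFERENCES dim_location(loc_key),
            weight   DECIMAL(6,2)    -- the daily measure
        );

    With that shape, "male vs. female weight by classification" is a GROUP BY over dim_sex and dim_demographic joined to the fact table.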


  • SQL SERVER 2008 JOIN hints

    - by Nai
    Hi all. Recently, I was trying to optimise this query:

        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints. So I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. So I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore... Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser but couldn't verify the validity of the statement. When are JOIN hints used? When the sh*t hits the fan and ghost busters ain't in town? What's the difference between LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint? Thanks for your time and help, people! I'm running SQL Server 2008, BTW. The statistics mentioned above are ESTIMATED execution plans.
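
    For readers following along, the hint slots into the join syntax itself - a sketch of the three physical-join forms under discussion, reusing the query above:

        UPDATE Analytics SET UserID = x.UserID
        FROM Analytics z INNER LOOP JOIN UserDetail x ON x.UserGUID = z.UserGUID
        -- INNER HASH JOIN ...  forces a hash match instead
        -- INNER MERGE JOIN ... forces a merge join (both inputs must be sorted on the join key)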


  • .NET EventHandlers - Generic or no?

    - by Chris Marasti-Georg
    Every time I start in deep on a C# project, I end up with lots of events that really just need to pass a single item. I stick with the EventHandler/EventArgs practice, but what I like to do is have something like:

        public delegate void EventHandler<T>(object src, EventArgs<T> args);

        public class EventArgs<T> : EventArgs
        {
            private T item;

            public EventArgs(T item)
            {
                this.item = item;
            }

            public T Item
            {
                get { return item; }
            }
        }

    Later, I can have my:

        public event EventHandler<Foo> FooChanged;
        public event EventHandler<Bar> BarChanged;

    However, it seems that the standard for .NET is to create a new delegate and EventArgs subclass for each type of event. Is there something wrong with my generic approach? EDIT: The reason for this post is that I just re-created this in a new project, and wanted to make sure it was ok. Actually, I was re-creating it as I posted. I found that there is a generic EventHandler<TEventArgs>, so you don't need to create the generic delegate, but you still need the generic EventArgs<T> class, because of the constraint TEventArgs : EventArgs. Another EDIT: One downside (to me) of the built-in solution is the extra verbosity:

        public event EventHandler<EventArgs<Foo>> FooChanged;

    vs.

        public event EventHandler<Foo> FooChanged;

    It can be a pain for clients to register for your events though, because the System namespace is imported by default, so they have to manually seek out your namespace, even with a fancy tool like Resharper... Anyone have any ideas pertaining to that?


  • custom view on iPhone's native media player (MPMoviePlayerController)

    - by sneha
    I am building an application that implements a custom view on iPhone's native media player, and I want your help in deciding how to direct this effort. At present I have found out that the iPhone SDK doesn't support APIs to customize the media player. I need these things in the player: I would like to have custom views, i.e. I want to change all the control buttons on the player, like Play/Pause, the seek bar etc. The background of the player will also need to be different. The player has to play an audio or video file from a local/remote location. Can I use MPMoviePlayerController if it can be customized (how to do it??)? However, any other third-party player approved by Apple which has the ability to download and play a media file from a local/remote location is also fine. It would be great to have access to the media player's buffer so that it can be encrypted. I have the following questions:

        1. Any help in building/customizing the player?
        2. Do you see issues in signing of the application?
        3. Does Apple have any restrictions on customizing the media player?
        4. Any sample iPhone application where the media player is customized?

    Any help in this regard is highly appreciated.


  • Feasibility of using Silverlight for web and windows client with common code base for data intensive

    - by Kabeer
    Hello. Recently in a conversation, someone suggested I make use of Silverlight if I am targeting a web client and a Windows client for the same application. This would cut down my effort of supporting the contrast between the two presentation layers. Mine is a product that will be deployed in enterprises; both web and Windows clients are desirable. With the above context, I have a few queries:

        Is it advisable to adopt the recommended approach, and is this approach becoming a trend?
        Besides some configuration & deployment tweaking, will this significantly reduce effort on the presentation layer?
        Is there a possibility that my future prospects (for this product) will resist a Silverlight footprint?
        Will I be able to make use of the ASP.Net MVC pattern?
        Will there be any performance implication for the web client?
        Will Silverlight support incremental load of controls?
        If my back-end includes SSRS, will I be able to harness all its front-end features with Silverlight?
        Will I be able to support additional devices with the same code base in future?

    Mine is a very data intensive application from both a data entry and a reporting perspective. Is it advisable to use 3rd party controls (like Telerik) for improved user experience and developer productivity? Are there any professional quality open source Silverlight controls (libraries) available? Further, I seek information on best practices in the context I shared above.


  • noSQL/SQL/RoR: Trying to build scalable ratings table for the game

    - by alexeypro
    I am trying to solve a complex thing (as it looks to me). I have the following entities: PLAYER (a few of them, with names like "John", "Peter", etc.) - each has a unique ID; for simplicity let's say it's their name. GAME (a few of them, say named "Hide and Seek", "Jump and Run", etc.) - same, each has a unique ID; for simplicity's sake let it be its name for now. SCORE (it's numeric). So, how it works: each PLAYER can play in multiple GAMES, and he gets some SCORE in every GAME. I need to build rating tables -- and not just one!

        Table #1: most played GAMES
        Table #2: best PLAYERS across all games (say, the total SCORE over every GAME)
        Table #3: best PLAYERS per GAME (by SCORE in that particular GAME)

    I could build something straightforward right away, but that will not work. I will have more than 10,000 players, and 15 games, which will grow for sure. A score can be as low as 0 and as high as 1,000,000 (not sure if higher is possible at this moment) for a player in a game. So I really need some relative data. Any suggestions? I am planning to do it with SQL, but maybe just use it for key-value storage; anything -- any ideas are welcome. Thank you!
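
    A starting-point sketch in plain SQL (table and column names invented): with one row per play, all three boards are simple aggregates, and composite indexes such as (game_id, score) keep them cheap until the data outgrows a single table:

        -- one row per play
        CREATE TABLE scores (player_id INT, game_id INT, score INT);

        SELECT game_id, COUNT(*) AS plays                  -- Table #1: most played games
        FROM scores GROUP BY game_id ORDER BY plays DESC;

        SELECT player_id, SUM(score) AS total              -- Table #2: best players overall
        FROM scores GROUP BY player_id ORDER BY total DESC;

        SELECT player_id, MAX(score) AS best               -- Table #3: best players in one game
        FROM scores WHERE game_id = 42
        GROUP BY player_id ORDER BY best DESC;

    At 10,000+ players the usual next step is caching or materializing these aggregates rather than abandoning the relational shape.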


  • 3D Web Sites and Applications

    - by Scott Evernden
    I have for the last several years been struggling to understand why the Internet has so few actually useful 3D web applications. It's 2009 and still everything looks like pages from a Sears catalog. You can turn on your TV and find flying logos every night. After that you can get nostalgic and flip on the ol' N64 and play some Zelda or Mario Kart. On the PC, The Sims 2 is approaching 6 years old already. And then there's WoW. The current generation of users - the Facebook crowd, let's say - has ~no~ problem dealing with multi-dimensional environments. And yet, nothing really immersive seems to happen on the web. I've been hearing about VRML and X3D for at least 10 years and ... pffft .. nothing earth-shaking going on there. Java 3D? .. cool! .. but ...... still .... waiting and waiting. Do you think it will take a killer web app before people become accustomed to, or will actively seek out, what could be more engaging web experiences? I am not talking about Second Life and other dedicated downloaded applications. I am probably more focused on apps like Lively or SceneCaster or Hangout or a half dozen others that are delivered 'painlessly' directly into web pages. My own particular interest is in the domain of virtual stores and immersive shopping. It's been a challenge trying to understand why an average user would not want to browse and wander a changing mall-space - like in the real world -- entertained by unexpected discovery. Is the 3D web always going to be 5 years in the future?


  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

        -- [Data] has a predictable structure and a simple clustered index of the primary key:
        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

        -- My query joins the table to itself, looking for a certain kind of "overlapping" records:
        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range

    [Data] has about a quarter-million records so far, and 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ?? If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% & 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow??

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the Clustered Index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index, I can expect the performance will get much worse with additional data. What gives?


  • Performance problem with System.Net.Mail

    - by Saif Khan
    I have this unusual problem with mailing from my app. At first it wasn't working (I was getting an "unable to relay" error); anyway, I added the proper authentication and it works. My problem now is, if I try to send around 300 emails (each with a 500k attachment), the app starts hanging at around 95% through the process. Here is some of my code, which is called for each mail to be sent:

        Using mail As New MailMessage()
            With mail
                .From = New MailAddress(My.Resources.EmailFrom)
                For Each contact As Contact In Contacts
                    .To.Add(contact.Email)
                Next
                .Subject = "Accounting"
                .Body = My.Resources.EmailBody
                'Back the stream up to the beginning or else the attachment
                'will be sent as a zero (0) byte file.
                attachment.Seek(0, SeekOrigin.Begin)
                .Attachments.Add(New Attachment(attachment, String.Concat(Item.Year, Item.AttachmentType.Extension)))
            End With
            Dim smtp As New SmtpClient("192.168.1.2")
            With smtp
                .DeliveryMethod = SmtpDeliveryMethod.Network
                .UseDefaultCredentials = False
                .Credentials = New NetworkCredential("username", "password")
                .Send(mail)
            End With
        End Using
        With item
            .SentStatus = True
            .DateSent = DateTime.Now.Date
            .Save()
        End With
        Return

    I was thinking, can I just prepare all the mails, add them to a collection, then open one SMTP connection and just iterate the collection, calling Send like this?

        Using mail As New MailMessage()
            ...
            MailCollection.Add(mail)
        End Using
        ...
        Dim smtp As New SmtpClient("192.168.1.2")
        With smtp
            .DeliveryMethod = SmtpDeliveryMethod.Network
            .UseDefaultCredentials = False
            .Credentials = New NetworkCredential("username", "password")
            For Each mail In MailCollection
                .Send(mail)
            Next
        End With
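
    One hedged observation on the proposed rewrite: the Using block disposes each MailMessage (and its attachment stream) before the later Send, so collecting them that way would likely fail. A sketch that keeps one SmtpClient but still disposes each message as it goes - releasing each ~500k attachment per iteration instead of holding all 300 in memory (ItemsToSend is a placeholder for however the items are enumerated):

        Dim smtp As New SmtpClient("192.168.1.2")
        smtp.DeliveryMethod = SmtpDeliveryMethod.Network
        smtp.UseDefaultCredentials = False
        smtp.Credentials = New NetworkCredential("username", "password")

        For Each item As Item In ItemsToSend
            Using mail As New MailMessage()
                ' ... build the message exactly as above ...
                smtp.Send(mail)
            End Using   ' attachment stream released here, before the next send
        Next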


  • Add Hexadecimal Header Info to JPEG File Using Java

    - by jboyd
    I need to add header info to a JPEG file in order to get it to work properly when shared on some websites. I've tracked down the correct info through a lot of hex digging, but now I'm kind of stuck trying to get it into the file. I know where in the file it needs to go, and I know how long it is. My problem is that RandomAccessFile just overwrites existing data in the file and FileOutputStream appends the data to the end. I don't want either; I want to INSERT data starting at the third byte. My example code:

        File fileToChange = new File("someimage.jpg");
        byte[] i = new byte[2];
        i[0] = Integer.decode("0xcc").byteValue();
        i[1] = Integer.decode("0xcc").byteValue();
        RandomAccessFile f = new RandomAccessFile(new File("videothing.jpg"), "rw");
        long aPositionWhereIWantToGo = 2;
        f.seek(aPositionWhereIWantToGo); // this basically reads n bytes in the file
        f.write(i);
        f.close();

    So this doesn't work because it overwrites, and does not insert. I can't find any way to just insert data into a file.
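
    Since neither class can shift existing bytes, a straightforward workaround (a sketch, not the only way - for very large files you would stream through a temp file instead) is to read everything after the insertion point, write the new bytes, then write the saved tail back:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class JpegHeaderPatch {
            public static void main(String[] args) throws IOException {
                RandomAccessFile f = new RandomAccessFile("videothing.jpg", "rw");
                long insertAt = 2;                                  // insert at the third byte
                byte[] tail = new byte[(int) (f.length() - insertAt)];
                f.seek(insertAt);
                f.readFully(tail);                                  // save everything after the gap
                f.seek(insertAt);
                f.write(new byte[] { (byte) 0xcc, (byte) 0xcc });   // the header bytes to insert
                f.write(tail);                                      // put the tail back
                f.close();
            }
        }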


  • Curve fitting: Find the smoothest function that satisfies a list of constraints.

    - by dreeves
    Consider the set of non-decreasing surjective (onto) functions from (-inf,inf) to [0,1]. (Typical CDFs satisfy this property.) In other words, for any real number x, 0 <= f(x) <= 1. The logistic function is perhaps the most well-known example. We are now given some constraints in the form of a list of x-values and, for each x-value, a pair of y-values that the function must lie between. We can represent that as a list of {x,ymin,ymax} triples such as:

        constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
                       {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113},
                       {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}}

    Graphically that looks like this: [plot of the error bars]. We now seek a curve that respects those constraints, for example: [plot of a candidate curve]. Let's first try a simple interpolation through the midpoints of the constraints:

        mids = ({#1, Mean[{#2, #3}]}&) @@@ constraints
        f = Interpolation[mids, InterpolationOrder -> 0]

    Plotted, f looks like this: [step-function plot]. That function is not surjective. Also, we'd like it to be smoother. We can increase the interpolation order, but now it violates the constraint that its range is [0,1]: [overshooting plot]. The goal, then, is to find the smoothest function that satisfies the constraints:

        1. Non-decreasing.
        2. Tends to 0 as x approaches negative infinity and tends to 1 as x approaches infinity.
        3. Passes through a given list of y-error-bars.

    The first example I plotted above seems to be a good candidate, but I did that with Mathematica's FindFit function assuming a lognormal CDF. That works well in this specific example, but in general there need not be a lognormal CDF that satisfies the constraints.
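
    One way to make "smoothest" precise (an assumption on our part - the post leaves the term open) is the smoothing-spline criterion: minimize total squared curvature subject to the stated shape constraints. In LaTeX form:

        \min_{f}\; \int_{-\infty}^{\infty} f''(x)^2 \, dx
        \quad \text{subject to} \quad
        f'(x) \ge 0, \qquad
        \lim_{x \to -\infty} f(x) = 0, \qquad
        \lim_{x \to +\infty} f(x) = 1, \qquad
        y_i^{\min} \le f(x_i) \le y_i^{\max} \ \text{for each constraint triple}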


  • How to analyze the efficiency of this algorithm Part 2

    - by Leonardo Lopez
    I found an error in the way I explained this question before, so here it goes again:

        FUNCTION SEEK(A,X)
        1. FOUND = FALSE
        2. K = 1
        3. WHILE (NOT FOUND) AND (K < N)
           a. IF (A[K] = X) THEN
              1. FOUND = TRUE
           b. ELSE
              1. K = K + 1
        4. RETURN

    Analyzing this algorithm (pseudocode), I can count the number of steps it takes to finish, and analyze its efficiency in theta notation, T(n): a linear algorithm. OK. The following code, by contrast, depends on the formulas inside the loop in order to finish. The deal is that there is no variable N in the code, so the efficiency of this algorithm will always be the same, since we're assigning the value 1 to both the A and B variables:

        1. A = 1
        2. B = 1
        3. UNTIL (B > 100)
           a. B = 2A - 2
           b. A = A + 3

    Now I believe this algorithm performs in constant time, always. But how can I use algebra in order to find out how many steps it takes to finish?
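
    A sketch of the algebra (assuming the loop body executes before each test of B > 100): after the k-th pass, A = 1 + 3k, and the B computed on that pass used the previous A, so

        B_k = 2*(1 + 3*(k-1)) - 2 = 6*(k-1)

    The loop exits as soon as 6*(k-1) > 100, i.e. k - 1 >= 17, so the body runs exactly 18 times no matter what. A fixed iteration count is precisely why the running time is constant, Theta(1).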


  • Crystal Reports - export to PDF in MVC

    - by BhejaFry
    Hi folks. I have integrated the below code in my application to generate a PDF file using Crystal Reports in an MVC project. However, after the request is processed, I get to see only 2 pages in the PDF file while my data returns more than 2 records. Also, the PDF isn't rendered as soon as the page is processed; instead I have to refresh at least once, and then the PDF is rendered in the browser.

        using CrystalDecisions.CrystalReports.Engine;

        public FileStreamResult Report()
        {
            ReportClass rptH = new ReportClass();
            List<sampledataset> data = objdb.getdataset();
            rptH.FileName = Server.MapPath("[reportName].rpt");
            rptH.Load();
            rptH.SetDatabaseLogon("un", "pwd", "server", "db");
            rptH.SetDataSource(data);
            Stream stream = rptH.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
            stream.Seek(0, System.IO.SeekOrigin.Begin);
            return new FileStreamResult(stream, "application/pdf");
        }

    I took the code from here on SO but modified it like above. TIA.
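
    One variation worth trying, offered as a hedged sketch (whether it cures the two-page symptom depends on what ExportToStream actually returns): drain the report into a byte array first, so the response carries a definite Content-Length instead of an open-ended stream:

        // inside Report(), with the return type widened to ActionResult so File() applies:
        using (Stream stream = rptH.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat))
        using (var ms = new System.IO.MemoryStream())
        {
            stream.CopyTo(ms);                         // .NET 4+: copy the whole report
            return File(ms.ToArray(), "application/pdf");
        }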


  • MySQL: Merging 2 results into 1 table

    - by AlphaRomeo69
    I'm building a C# program and am currently stuck at fetching data from a MySQL database and binding it to a grid view. I have been researching for a few days now, but to no avail. I have 4 tables in the database:

        table 1 - alpha:   attributes (id, type, user, role)
        table 2 - bravo:   attributes (id, type, date, user)
        table 3 - charlie: attributes (id, type, cat, doneby, comment)
        table 4 - delta:   attributes (id, type, cat, doneby)

    The PK of alpha and bravo is (id); the PK of charlie and delta is (id, type). I did a query1 before by inner joining alpha, bravo and charlie, which led to the successful result of (id, type, date, user, role, cat, doneby, comment), and I also did a query2 before by inner joining alpha, bravo and delta, which led to the successful result of (id, type, date, user, role, cat, doneby). Right now, I'm trying to build a query3 which will merge the results from query1 and query2 together. The result of my attempts leads to (id, type, date, user, role, cat, doneby, comment, id, type, date, user, role, cat, doneby). As I do not want the repeated columns, I would like to seek advice on how to get the result to become like the one below, by placing the records as new tuples in the result table: (id, type, date, user, role, cat, doneby, comment). Thanks! P.S: the PK would not pose a problem due to (id, type).
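
    It sounds like the merge wanted here is extra rows rather than extra columns, which in SQL is a UNION. A sketch (query1 and query2 stand for the two joins already written, wrapped as views or subqueries; the second select pads the comment column it lacks so both sides have the same shape):

        SELECT id, type, date, user, role, cat, doneby, comment
        FROM query1
        UNION ALL
        SELECT id, type, date, user, role, cat, doneby, NULL AS comment
        FROM query2;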


  • Is there any way to close a StreamWriter without closing its BaseStream?

    - by Binary Worrier
    My root problem is that when a using block calls Dispose on a StreamWriter, it also disposes the BaseStream (same problem with Close). I have a workaround for this, but as you can see it involves copying the stream. Is there any way to do this without copying the stream? The purpose of this is to get the contents of a string (originally read from a database) into a stream, so the stream can be read by a third-party component. NB I cannot change the third-party component.

        public System.IO.Stream CreateStream(string value)
        {
            var baseStream = new System.IO.MemoryStream();
            var baseCopy = new System.IO.MemoryStream();
            using (var writer = new System.IO.StreamWriter(baseStream, System.Text.Encoding.UTF8))
            {
                writer.Write(value);
                writer.Flush();
                baseStream.WriteTo(baseCopy);
            }
            baseCopy.Seek(0, System.IO.SeekOrigin.Begin);
            return baseCopy;
        }

    Used as:

        public void Noddy()
        {
            System.IO.Stream myStream = CreateStream("The contents of this string are unimportant");
            My3rdPartyComponent.ReadFromStream(myStream);
        }

    Ideally I'm looking for an imaginary method called BreakAssociationWithBaseStream, e.g.:

        public System.IO.Stream CreateStream_Alternate(string value)
        {
            var baseStream = new System.IO.MemoryStream();
            using (var writer = new System.IO.StreamWriter(baseStream, System.Text.Encoding.UTF8))
            {
                writer.Write(value);
                writer.Flush();
                writer.BreakAssociationWithBaseStream();
            }
            return baseStream;
        }
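
    For later readers: .NET 4.5 added a StreamWriter constructor with a leaveOpen flag, which is in effect the BreakAssociationWithBaseStream being asked for. A sketch (on older frameworks the copy-based workaround above still applies):

        public System.IO.Stream CreateStream(string value)
        {
            var baseStream = new System.IO.MemoryStream();
            using (var writer = new System.IO.StreamWriter(
                baseStream, System.Text.Encoding.UTF8, 1024, leaveOpen: true))
            {
                writer.Write(value);
            }   // Dispose flushes the writer but leaves baseStream open
            baseStream.Seek(0, System.IO.SeekOrigin.Begin);
            return baseStream;
        }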


  • jQuery Validation error...

    - by Povylas
    Hi, I have been struggling with this jQuery Validation Plugin. Here is the code:

        <script type="text/javascript">
        $(function() {
            var validator = $('#signup').validate({
                errorElement: 'span',
                rules: {
                    username: {
                        required: true,
                        minlenght: 6
                        //remote: "check-username.php"
                    },
                    password: {
                        required: true,
                        minlength: 5
                    },
                    confirm_password: {
                        required: true,
                        minlength: 5,
                        equalTo: "#password"
                    },
                    email: {
                        required: true,
                        email: true
                    },
                    agree: "required"
                },
                messages: {
                    username: {
                        required: "Please enter a username",
                        minlength: "Your username must consist of at least 6 characters"
                        //remote: "Somenoe have already chosen nick like this."
                    },
                    password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long"
                    },
                    confirm_password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long",
                        equalTo: "Please enter the same password as above"
                    },
                    email: "Please enter a valid email address",
                    agree: "Please accept our policy"
                }
            });

            var root = $("#wizard").scrollable({size: 1, clickable: false});
            // some variables that we need
            var api = root.scrollable();

            $("#data").click(function() {
                validator.form();
            });

            // validation logic is done inside the onBeforeSeek callback
            api.onBeforeSeek(function(event, i) {
                if ($("#signup").valid() == false) {
                    return false;
                } else {
                    return true;
                }
                $("#status li").removeClass("active").eq(i).addClass("active");
            });

            // if tab is pressed on the next button seek to next page
            root.find("button.next").keydown(function(e) {
                if (e.keyCode == 9) {
                    // seeks to next tab by executing our validation routine
                    api.next();
                    e.preventDefault();
                }
            });

            $('button.fin').click(function() {
                parent.$.fn.fancybox.close()
            });
        });
        </script>

    And here is the error:

        $.validator.methods[method] is undefined
        http://www.vvv.vhost.lt/js/jquery-validate/jquery.validate.min.js Line 15

    I am completely confused... Maybe some kind of handler is needed? I would be grateful for any kind of answer.
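
    An editorial observation, offered as a likely diagnosis rather than a confirmed fix: the rules object spells one rule minlenght, and jQuery Validate resolves each rule name via $.validator.methods[method], so a misspelled key yields exactly this "is undefined" error. Compare:

        rules: {
            username: {
                required: true,
                minlength: 6   // "minlength", not "minlenght" -- the plugin has no
                               // validation method registered under the misspelling
            },
            ...
        }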


  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.

        Is this the right way to do this?
        What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
        When sending the HTTP requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024k per request?
        When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
        Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory wise? What if I have several downloads simultaneously?
        If I'm not using a memory mapped file, should I open the file per allowed connection? Or, when needing to write to the file, simply seek? (If I did use a memory mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
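
    For reference, the wire-level exchange being described (offsets here are an example): a server that honors ranges answers 206 Partial Content with a Content-Range header, while one that doesn't simply answers 200 OK with the whole file - which is also how a client can detect that resuming isn't supported:

        GET /file.bin HTTP/1.1
        Host: example.com
        Range: bytes=1048576-2097151

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 1048576-2097151/4194304
        Content-Length: 1048576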


  • Speed up SQL Server Fulltext Index through Text Duplication of Non-Indexed Columns

    - by Alex
    1) I have the text fields FirstName, LastName, and City. They are fulltext indexed.
    2) I also have the FK int fields AuthorId and EditorId, which are not fulltext indexed.

    A search on FirstName = 'abc' AND AuthorId = 1 will first search the entire fulltext index for 'abc', and then narrow the result set for AuthorId = 1. This is bad because it is a huge waste of resources: the fulltext search is performed on many records that won't be applicable. Unfortunately, to my knowledge, this can't be turned around (narrowing by AuthorId first and then fulltext-searching the subset that matches), because the FTS process is separate from SQL Server. Now, my proposed solution, on which I seek feedback: does it make sense to create another computed column, included in the fulltext index, which identifies the author as text (e.g. AUTHORONE)? That way I could drop the AuthorId restriction and instead make it part of my fulltext search (a search for 'abc' would become 'abc' AND 'AUTHORONE' - all executed as part of the fulltext search). Is this a good idea or not? Why?
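
    A sketch of what that marker-column scheme might look like (table and column names are invented, and whether a computed column can participate in the fulltext index depends on it being persisted and deterministic - worth verifying against your SQL Server version before committing to this design):

        ALTER TABLE Articles
            ADD AuthorTag AS ('ZZAUTHOR' + CAST(AuthorId AS VARCHAR(10))) PERSISTED;
        -- add AuthorTag to the fulltext index, then:
        SELECT *
        FROM Articles
        WHERE CONTAINS((FirstName, AuthorTag), '"abc" AND "ZZAUTHOR1"');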

